Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body,index,text_combine,label,text,binary_label 1186,5102867002.0,IssuesEvent,2017-01-04 19:37:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_group.py and --diff,affects_2.1 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: ec2_group.py ##### Ansible Version: ``` ansible 2.1.0 (devel 33f96edcd0) last updated 2016/03/10 14:52:58 (GMT -400) lib/ansible/modules/core: (detached HEAD c86a0ef84a) last updated 2016/03/10 14:53:03 (GMT -400) lib/ansible/modules/extras: (detached HEAD 33a557cc59) last updated 2016/03/10 14:53:03 (GMT -400) config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: none ##### Environment: N/A ##### Summary: ec2_group.py doesn't do anything useful with --diff, and it'd be great if it did (with or without --check as well); without it, we have to resort to before-and-after captures of 'aws ec2 describe-security-groups' output, and even that only tells what changed after the change was already made. ##### Steps To Reproduce: Adding --diff to a playbook that uses ec2_group.py would show output indicating what was changed (or would be changed, with --check) -- perhaps 'diff' style, perhaps some other format if 'diff' style doesn't really make sense. (I don't have a specific idea in mind, but would be happy to help come up with something and/or comment on something that others come up with.) ##### Expected Results: Some sort of output indicating what had change (or would changed). ##### Actual Results: No change in output from when the playbook is run without --diff. ",True,"ec2_group.py and --diff - ##### Issue Type: - Feature Idea ##### Plugin Name: ec2_group.py ##### Ansible Version: ``` ansible 2.1.0 (devel 33f96edcd0) last updated 2016/03/10 14:52:58 (GMT -400) lib/ansible/modules/core: (detached HEAD c86a0ef84a) last updated 2016/03/10 14:53:03 (GMT -400) lib/ansible/modules/extras: (detached HEAD 33a557cc59) last updated 2016/03/10 14:53:03 (GMT -400) config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: none ##### Environment: N/A ##### Summary: ec2_group.py doesn't do anything useful with --diff, and it'd be great if it did (with or without --check as well); without it, we have to resort to before-and-after captures of 'aws ec2 describe-security-groups' output, and even that only tells what changed after the change was already made. ##### Steps To Reproduce: Adding --diff to a playbook that uses ec2_group.py would show output indicating what was changed (or would be changed, with --check) -- perhaps 'diff' style, perhaps some other format if 'diff' style doesn't really make sense. (I don't have a specific idea in mind, but would be happy to help come up with something and/or comment on something that others come up with.) ##### Expected Results: Some sort of output indicating what had change (or would changed). ##### Actual Results: No change in output from when the playbook is run without --diff. 
",1, group py and diff issue type feature idea plugin name group py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides ansible configuration none environment n a summary group py doesn t do anything useful with diff and it d be great if it did with or without check as well without it we have to resort to before and after captures of aws describe security groups output and even that only tells what changed after the change was already made steps to reproduce adding diff to a playbook that uses group py would show output indicating what was changed or would be changed with check perhaps diff style perhaps some other format if diff style doesn t really make sense i don t have a specific idea in mind but would be happy to help come up with something and or comment on something that others come up with expected results some sort of output indicating what had change or would changed actual results no change in output from when the playbook is run without diff ,1 997,4760612398.0,IssuesEvent,2016-10-25 03:57:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,copy module should preserve mode of src file when mode is not explicitly set,affects_1.9 bug_report feature_idea in progress waiting_on_maintainer,"##### Issue Type: Bug Report ##### Component Name: copy module ##### Ansible Version: 1.9.0.1 ##### Environment: *ix ##### Summary: When using the copy module to copy multiple files to their destination, it would be really useful if the mode of the source was preserved. This would allow copying several files in a list without explicitly setting modes for each one. In any case, this is the expected behavior when going by the documentation (which does not specify a default for mode -- leading one to assume the mode would be preserved unless explicitly specified). ##### Steps To Reproduce: - name: copy test copy: src={{ item }} dest=/destination owner=root group=root with_items: - test1.sh - test1.conf with test1.sh locally with 0755 permissions and test1.conf locally with 0644 permissions ##### Expected Results: test1.sh on target with 0755 permissions and test1.conf on target with 0644 permissions ##### Actual Results: test1.sh and test1.conf on target with 0644 permissions. If you were to want to maintain the current behavior for backwards compatibility, the default mode should at least be documented (with possibly a recommendation to look at the synchronize module instead). Alternatively this could be done with something like - name: copy test copy: src={{ item }} dest=/destination owner=root group=root mode=preserve with_items: - test1.sh - test1.conf ",True,"copy module should preserve mode of src file when mode is not explicitly set - ##### Issue Type: Bug Report ##### Component Name: copy module ##### Ansible Version: 1.9.0.1 ##### Environment: *ix ##### Summary: When using the copy module to copy multiple files to their destination, it would be really useful if the mode of the source was preserved. This would allow copying several files in a list without explicitly setting modes for each one. In any case, this is the expected behavior when going by the documentation (which does not specify a default for mode -- leading one to assume the mode would be preserved unless explicitly specified). 
##### Steps To Reproduce: - name: copy test copy: src={{ item }} dest=/destination owner=root group=root with_items: - test1.sh - test1.conf with test1.sh locally with 0755 permissions and test1.conf locally with 0644 permissions ##### Expected Results: test1.sh on target with 0755 permissions and test1.conf on target with 0644 permissions ##### Actual Results: test1.sh and test1.conf on target with 0644 permissions. If you were to want to maintain the current behavior for backwards compatibility, the default mode should at least be documented (with possibly a recommendation to look at the synchronize module instead). Alternatively this could be done with something like - name: copy test copy: src={{ item }} dest=/destination owner=root group=root mode=preserve with_items: - test1.sh - test1.conf ",1,copy module should preserve mode of src file when mode is not explicitly set issue type bug report component name copy module ansible version environment ix summary when using the copy module to copy multiple files to their destination it would be really useful if the mode of the source was preserved this would allow copying several files in a list without explicitly setting modes for each one in any case this is the expected behavior when going by the documentation which does not specify a default for mode leading one to assume the mode would be preserved unless explicitly specified steps to reproduce name copy test copy src item dest destination owner root group root with items sh conf with sh locally with permissions and conf locally with permissions expected results sh on target with permissions and conf on target with permissions actual results sh and conf on target with permissions if you were to want to maintain the current behavior for backwards compatibility the default mode should at least be documented with possibly a recommendation to look at the synchronize module instead alternatively this could be done with something like name copy test copy src item dest destination owner root group root mode preserve with items sh conf ,1 1716,6574472161.0,IssuesEvent,2017-09-11 13:01:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Windows Setup Module Doesn't Work If Running As SYSTEM,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Windows setup module (setup.ps1) ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Not applicable. ##### OS / ENVIRONMENT Not applicable. ##### SUMMARY In commit 6225614d5fa73a9182e7b7fa8d8d7d8fa8e00c56 (PR #3777), additional properties were to the the facts that Ansible gathers for Windows to bring them more in line with what Unix machines return for facts. In particular, this line appears assumes that the user account has an account domain sid: ```powershell Set-Attr $result.ansible_facts ""ansible_machine_id"" $user.User.AccountDomainSid.Value ``` However, with custom transports, an ansible command may be executed with an account that *doesn't* have a domain sid, such as a custom transport that uses an agent installed to execute commands directly as `SYSTEM`. Since system doesn't have a an AccountDomainSid, it is unable to fetch a Value for it, and we promptly die. I would be happy to contribute a patch for this, I'm just not sure about who else is depending on this `ansible_machine_id` property (and really what people are using it for tbh). 
##### STEPS TO REPRODUCE I'm currently executing via a custom transport (it's closed source right now, but will be pushing upstream as soon as I can) that executes commands as SYSTEM. To reproduce this, execute the setup module as SYSTEM. ##### EXPECTED RESULTS I expected everything to work properly. ##### ACTUAL RESULTS The setup module fails and we're seeing the following (sorry about the formatting): ``` {""exception"":""At C:\\Windows\\TEMP\\ansible-tmp-1478196574.65-49351933792000\\setup.ps1:363 char:1\r\n+ Set-Attr $result.ansible_facts \\""ansible_machine_id\\"" $user.User.AccountDomainSid. ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"",""msg"":""The property \\u0027Value\\u0027 cannot be found on this object. Verify that the property exists."",""failed"":true} ``` ",True,"Windows Setup Module Doesn't Work If Running As SYSTEM - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Windows setup module (setup.ps1) ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Not applicable. ##### OS / ENVIRONMENT Not applicable. ##### SUMMARY In commit 6225614d5fa73a9182e7b7fa8d8d7d8fa8e00c56 (PR #3777), additional properties were to the the facts that Ansible gathers for Windows to bring them more in line with what Unix machines return for facts. In particular, this line appears assumes that the user account has an account domain sid: ```powershell Set-Attr $result.ansible_facts ""ansible_machine_id"" $user.User.AccountDomainSid.Value ``` However, with custom transports, an ansible command may be executed with an account that *doesn't* have a domain sid, such as a custom transport that uses an agent installed to execute commands directly as `SYSTEM`. Since system doesn't have a an AccountDomainSid, it is unable to fetch a Value for it, and we promptly die. I would be happy to contribute a patch for this, I'm just not sure about who else is depending on this `ansible_machine_id` property (and really what people are using it for tbh). ##### STEPS TO REPRODUCE I'm currently executing via a custom transport (it's closed source right now, but will be pushing upstream as soon as I can) that executes commands as SYSTEM. To reproduce this, execute the setup module as SYSTEM. ##### EXPECTED RESULTS I expected everything to work properly. ##### ACTUAL RESULTS The setup module fails and we're seeing the following (sorry about the formatting): ``` {""exception"":""At C:\\Windows\\TEMP\\ansible-tmp-1478196574.65-49351933792000\\setup.ps1:363 char:1\r\n+ Set-Attr $result.ansible_facts \\""ansible_machine_id\\"" $user.User.AccountDomainSid. ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"",""msg"":""The property \\u0027Value\\u0027 cannot be found on this object. 
Verify that the property exists."",""failed"":true} ``` ",1,windows setup module doesn t work if running as system issue type bug report component name windows setup module setup ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables not applicable os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific not applicable summary in commit pr additional properties were to the the facts that ansible gathers for windows to bring them more in line with what unix machines return for facts in particular this line appears assumes that the user account has an account domain sid powershell set attr result ansible facts ansible machine id user user accountdomainsid value however with custom transports an ansible command may be executed with an account that doesn t have a domain sid such as a custom transport that uses an agent installed to execute commands directly as system since system doesn t have a an accountdomainsid it is unable to fetch a value for it and we promptly die i would be happy to contribute a patch for this i m just not sure about who else is depending on this ansible machine id property and really what people are using it for tbh steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i m currently executing via a custom transport it s closed source right now but will be pushing upstream as soon as i can that executes commands as system to reproduce this execute the setup module as system expected results i expected everything to work properly actual results the setup module fails and we re seeing the following sorry about the formatting exception at c windows temp ansible tmp setup char r n set attr result ansible facts ansible machine id user user accountdomainsid r n msg the property cannot be found on this object verify that the property exists failed true ,1 1881,6577510856.0,IssuesEvent,2017-09-12 01:25:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,setting iam_role state to active or inactive fails with python UnboundLocal error,affects_2.0 aws bug_report cloud P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OS X 10.11.3 ##### SUMMARY It seems to be impossible to deactivate or revoke AWS access keys via Ansible. ##### STEPS TO REPRODUCE - Attempt to revoke or make inactive an IAM Access Key via the following playbook: ``` --- - hosts: all tasks: - name: Create TSE users with approrpiate group and fetch IAM keys iam: iam_type: user name: ""joe.user"" state: present access_key_state: ""inactive"" access_key_ids: - ""AKIABRACADABRA"" groups: ""DEVUSER"" profile: dev register: newusers - debug: var=newusers ``` Run with: `ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvvv` ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269919.04-56226139783789/iam"", line 2920, in main() File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269919.04-56226139783789/iam"", line 554, in main if any([n in key_state for n in ['active', 'inactive']]) and not key_ids: UnboundLocalError: local variable 'key_ids' referenced before assignment ``` This happens for key states of ""Active,Inactive"". Using ""Remove"" will provide this error: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""access_key_ids"": [""AKIABRACADABRA""], ""access_key_state"": ""remove"", ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""groups"": [""TSE""], ""iam_type"": ""user"", ""key_count"": 1, ""name"": ""joe.user"", ""new_name"": null, ""new_path"": null, ""password"": null, ""path"": ""/"", ""profile"": ""dev"", ""region"": null, ""security_token"": null, ""state"": ""present"", ""update_password"": ""always"", ""validate_certs"": true}, ""module_name"": ""iam""}, ""msg"": ""BotoServerError: 400 Bad Request\n\n \n Sender\n ValidationError\n 1 validation error detected: Value 'Remove' at 'status' failed to satisfy constraint: Member must satisfy enum value set: [Active, Inactive]\n \n 079855bd-f5ce-11e5-88c6-23b4e6cc6158\n\n""} ``` ##### EXPECTED RESULTS I expected 'remove' to remove the keys entirely without first having to run a 'inactive' operation on them (is that instead a feature request?) and I expected the 'inactive' and 'active' operations to work as expected instead of throwing a Python error. Additionally, when running 'create' against a user without any keys, I expected the keys to be available in the debug information after I registered the variable. 
##### ACTUAL RESULTS See above inline in 'steps to reproduce' Full output below: ### access_key_state: inactive ``` C02KN05HFFT3:oneoff-tsecreds karl.katzke$ ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvv No config file found; using defaults 1 plays in ./tsecreds.yaml PLAY *************************************************************************** TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248 `"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpfBc49B TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/setup; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/"" > /dev/null 2>&1' ok: [localhost] TASK [Create TSE users with approrpiate group and fetch IAM keys] ************** task path: /Users/karl.katzke/Work/ansible/oneoff-tsecreds/tsecreds.yaml:4 ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406 `"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpnY3azX TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/"" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam"", line 2920, in main() File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam"", line 554, in main if any([n in key_state for n in ['active', 'inactive']]) and not key_ids: UnboundLocalError: local variable 'key_ids' referenced before assignment fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""iam""}, ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @./tsecreds.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ### access_key_state: remove C02KN05HFFT3:oneoff-tsecreds karl.katzke$ ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvv No config file found; using defaults 1 plays in ./tsecreds.yaml PLAY *************************************************************************** TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273`"" && echo ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273`"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmprZfcCw TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/setup; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/"" > /dev/null 2>&1' ok: [localhost] TASK [Create TSE users with approrpiate group and fetch IAM keys] ************** task path: /Users/karl.katzke/Work/ansible/oneoff-tsecreds/tsecreds.yaml:4 ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934`"" && echo ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934`"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpHq1d7t TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/iam localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/iam; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/"" > /dev/null 2>&1' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""access_key_ids"": [""AKIAI3BJJ6AEJCLR5MSA""], ""access_key_state"": ""remove"", ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""groups"": [""TSE""], ""iam_type"": ""user"", ""key_count"": 1, ""name"": ""brian.outlaw"", ""new_name"": null, ""new_path"": null, ""password"": null, ""path"": ""/"", ""profile"": ""dev"", ""region"": null, ""security_token"": null, ""state"": ""present"", ""update_password"": ""always"", ""validate_certs"": true}, ""module_name"": ""iam""}, ""msg"": ""BotoServerError: 400 Bad Request\n\n \n Sender\n ValidationError\n 1 validation error detected: Value 'Remove' at 'status' failed to satisfy constraint: Member must satisfy enum value set: [Active, Inactive]\n \n 079855bd-f5ce-11e5-88c6-23b4e6cc6158\n\n""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @./tsecreds.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ",True,"setting iam_role state to active or inactive fails with python UnboundLocal error - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OS X 10.11.3 ##### SUMMARY It seems to be impossible to deactivate or revoke AWS access keys via Ansible. ##### STEPS TO REPRODUCE - Attempt to revoke or make inactive an IAM Access Key via the following playbook: ``` --- - hosts: all tasks: - name: Create TSE users with approrpiate group and fetch IAM keys iam: iam_type: user name: ""joe.user"" state: present access_key_state: ""inactive"" access_key_ids: - ""AKIABRACADABRA"" groups: ""DEVUSER"" profile: dev register: newusers - debug: var=newusers ``` Run with: `ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvvv` ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269919.04-56226139783789/iam"", line 2920, in main() File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269919.04-56226139783789/iam"", line 554, in main if any([n in key_state for n in ['active', 'inactive']]) and not key_ids: UnboundLocalError: local variable 'key_ids' referenced before assignment ``` This happens for key states of ""Active,Inactive"". Using ""Remove"" will provide this error: ``` fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""access_key_ids"": [""AKIABRACADABRA""], ""access_key_state"": ""remove"", ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""groups"": [""TSE""], ""iam_type"": ""user"", ""key_count"": 1, ""name"": ""joe.user"", ""new_name"": null, ""new_path"": null, ""password"": null, ""path"": ""/"", ""profile"": ""dev"", ""region"": null, ""security_token"": null, ""state"": ""present"", ""update_password"": ""always"", ""validate_certs"": true}, ""module_name"": ""iam""}, ""msg"": ""BotoServerError: 400 Bad Request\n\n \n Sender\n ValidationError\n 1 validation error detected: Value 'Remove' at 'status' failed to satisfy constraint: Member must satisfy enum value set: [Active, Inactive]\n \n 079855bd-f5ce-11e5-88c6-23b4e6cc6158\n\n""} ``` ##### EXPECTED RESULTS I expected 'remove' to remove the keys entirely without first having to run a 'inactive' operation on them (is that instead a feature request?) and I expected the 'inactive' and 'active' operations to work as expected instead of throwing a Python error. Additionally, when running 'create' against a user without any keys, I expected the keys to be available in the debug information after I registered the variable. ##### ACTUAL RESULTS See above inline in 'steps to reproduce' Full output below: ### access_key_state: inactive ``` C02KN05HFFT3:oneoff-tsecreds karl.katzke$ ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvv No config file found; using defaults 1 plays in ./tsecreds.yaml PLAY *************************************************************************** TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248 `"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpfBc49B TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/setup; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269887.0-254944181699248/"" > /dev/null 2>&1' ok: [localhost] TASK [Create TSE users with approrpiate group and fetch IAM keys] ************** task path: /Users/karl.katzke/Work/ansible/oneoff-tsecreds/tsecreds.yaml:4 ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406 `"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpnY3azX TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/"" > /dev/null 2>&1' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam"", line 2920, in main() File ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459269890.51-92058580067406/iam"", line 554, in main if any([n in key_state for n in ['active', 'inactive']]) and not key_ids: UnboundLocalError: local variable 'key_ids' referenced before assignment fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""iam""}, ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @./tsecreds.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ### access_key_state: remove C02KN05HFFT3:oneoff-tsecreds karl.katzke$ ansible-playbook -i ""localhost,"" -c local ./tsecreds.yaml -vvv No config file found; using defaults 1 plays in ./tsecreds.yaml PLAY *************************************************************************** TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273`"" && echo ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273`"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmprZfcCw TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/setup; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270090.04-124925881060273/"" > /dev/null 2>&1' ok: [localhost] TASK [Create TSE users with approrpiate group and fetch IAM keys] ************** task path: /Users/karl.katzke/Work/ansible/oneoff-tsecreds/tsecreds.yaml:4 ESTABLISH LOCAL CONNECTION FOR USER: karl.katzke localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934`"" && echo ""`echo $HOME/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934`"" )' localhost PUT /var/folders/tp/mry8ntpn6kj551z091w9bl0d_tl3s2/T/tmpHq1d7t TO /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/iam localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/iam; rm -rf ""/Users/karl.katzke/.ansible/tmp/ansible-tmp-1459270093.31-143517777805934/"" > /dev/null 2>&1' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""access_key_ids"": [""AKIAI3BJJ6AEJCLR5MSA""], ""access_key_state"": ""remove"", ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""groups"": [""TSE""], ""iam_type"": ""user"", ""key_count"": 1, ""name"": ""brian.outlaw"", ""new_name"": null, ""new_path"": null, ""password"": null, ""path"": ""/"", ""profile"": ""dev"", ""region"": null, ""security_token"": null, ""state"": ""present"", ""update_password"": ""always"", ""validate_certs"": true}, ""module_name"": ""iam""}, ""msg"": ""BotoServerError: 400 Bad Request\n\n \n Sender\n ValidationError\n 1 validation error detected: Value 'Remove' at 'status' failed to satisfy constraint: Member must satisfy enum value set: [Active, Inactive]\n \n 079855bd-f5ce-11e5-88c6-23b4e6cc6158\n\n""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @./tsecreds.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ",1,setting iam role state to active or inactive fails with python unboundlocal error issue type bug report component name iam module ansible version ansible config file configured module search path default w o overrides os environment os x summary it seems to be impossible to deactivate or revoke aws access keys via ansible steps to reproduce attempt to revoke or make inactive an iam access key via the following playbook hosts all tasks name create tse users with approrpiate group and fetch iam keys iam iam type user name joe user state present access key state inactive access key ids akiabracadabra groups devuser profile dev register newusers debug var newusers run with ansible playbook i localhost c local tsecreds yaml vvvv an exception occurred during task execution the full traceback is traceback most recent call last file users karl katzke ansible tmp ansible tmp iam line in main file users karl katzke ansible tmp ansible tmp iam line in main if any and not key ids unboundlocalerror local variable key ids referenced before assignment this happens for key states of active inactive using remove will provide this error fatal failed changed false failed true invocation module args access key ids access key state remove aws access key null aws secret key null url null groups iam type user key count name joe user new name null new path null password null path profile dev region null security token null state present update password always validate certs true module name iam msg botoservererror bad request n n sender n validationerror n validation error detected value remove at status failed to satisfy constraint member must satisfy enum value set n n n n expected results i expected remove to remove the keys entirely without first having to run a inactive operation on them is that instead a feature request and i expected the inactive and active operations to work as expected instead of throwing a python error additionally when running create against a user without any keys i expected the keys to be available in the debug information after i registered the variable actual results see above inline in steps to reproduce full output below access key state inactive oneoff tsecreds karl katzke ansible playbook i localhost c local tsecreds yaml vvv no config file found using defaults plays in tsecreds yaml play task establish local connection for user karl katzke localhost exec bin sh c umask mkdir p echo home ansible 
tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders tp t to users karl katzke ansible tmp ansible tmp setup localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users karl katzke ansible tmp ansible tmp setup rm rf users karl katzke ansible tmp ansible tmp dev null ok task task path users karl katzke work ansible oneoff tsecreds tsecreds yaml establish local connection for user karl katzke localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders tp t to users karl katzke ansible tmp ansible tmp iam localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users karl katzke ansible tmp ansible tmp iam rm rf users karl katzke ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file users karl katzke ansible tmp ansible tmp iam line in main file users karl katzke ansible tmp ansible tmp iam line in main if any and not key ids unboundlocalerror local variable key ids referenced before assignment fatal failed changed false failed true invocation module name iam parsed false no more hosts left to retry use limit tsecreds retry play recap localhost ok changed unreachable failed access key state remove oneoff tsecreds karl katzke ansible playbook i localhost c local tsecreds yaml vvv no config file found using defaults plays in tsecreds yaml play task establish local connection for user karl katzke localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders tp t tmprzfccw to users karl katzke ansible tmp ansible tmp setup localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users karl katzke ansible tmp ansible tmp setup rm rf users karl katzke ansible tmp ansible tmp dev null ok task task path users karl katzke work ansible oneoff tsecreds tsecreds yaml establish local connection for user karl katzke localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders tp t to users karl katzke ansible tmp ansible tmp iam localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users karl katzke ansible tmp ansible tmp iam rm rf users karl katzke ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module args access key ids access key state remove aws access key null aws secret key null url null groups iam type user key count name brian outlaw new name null new path null password null path profile dev region null security token null state present update password always validate certs true module name iam msg botoservererror bad request n n sender n validationerror n validation error detected value remove at status failed to satisfy constraint member must satisfy enum value set n n n n no more hosts left to retry use limit tsecreds retry play recap localhost ok changed unreachable failed ,1 1000,4769875134.0,IssuesEvent,2016-10-26 13:53:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Key does not seem to have been added (but it has),affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `apt_key` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY The module says it 
fails but the key has successfully been added on servers ##### STEPS TO REPRODUCE ``` - name: add key for yarn package apt_key: keyserver: pgp.mit.edu id: D101F7899D41F3C3 #temp fix because of the ""key does not seem to have been added"" #ignore_errors: yes become: yes become_user: root ``` ##### EXPECTED RESULTS Changed or OK output ##### ACTUAL RESULTS ``` FAILED! => {""changed"": false, ""failed"": true, ""id"": ""9D41F3C3"", ""msg"": ""key does not seem to have been added""} ``` But when running the command `apt-key list`, **the right key for yarn is listed**. ",True,"Key does not seem to have been added (but it has) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `apt_key` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY The module says it fails but the key has successfully been added on servers ##### STEPS TO REPRODUCE ``` - name: add key for yarn package apt_key: keyserver: pgp.mit.edu id: D101F7899D41F3C3 #temp fix because of the ""key does not seem to have been added"" #ignore_errors: yes become: yes become_user: root ``` ##### EXPECTED RESULTS Changed or OK output ##### ACTUAL RESULTS ``` FAILED! => {""changed"": false, ""failed"": true, ""id"": ""9D41F3C3"", ""msg"": ""key does not seem to have been added""} ``` But when running the command `apt-key list`, **the right key for yarn is listed**. ",1,key does not seem to have been added but it has issue type bug report component name apt key ansible version ansible os environment n a summary the module says it fails but the key has successfully been added on servers steps to reproduce name add key for yarn package apt key keyserver pgp mit edu id temp fix because of the key does not seem to have been added ignore errors yes become yes become user root expected results changed or ok output actual results failed changed false failed true id msg key does not seem to have been added but when running the command apt key list the right key for yarn is listed ,1 1282,5412022697.0,IssuesEvent,2017-03-01 13:29:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Force user creation to be local,affects_2.3 feature_idea waiting_on_maintainer,"Sometimes LDAP/NIS user names conflict with names we want to have local (like mysql, postgres). I'm worked around it other ways, but was told recently about luseradd. I wonder if there would be interest in adding a force=yes to the user module such that is uses luseradd instead of useradd. ",True,"Force user creation to be local - Sometimes LDAP/NIS user names conflict with names we want to have local (like mysql, postgres). I'm worked around it other ways, but was told recently about luseradd. I wonder if there would be interest in adding a force=yes to the user module such that is uses luseradd instead of useradd. 
",1,force user creation to be local sometimes ldap nis user names conflict with names we want to have local like mysql postgres i m worked around it other ways but was told recently about luseradd i wonder if there would be interest in adding a force yes to the user module such that is uses luseradd instead of useradd ,1 1860,6577413136.0,IssuesEvent,2017-09-12 00:44:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt locale override changes behavior of postgresql install,affects_1.9 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 1.9.5 ``` (looking at the commits, I think this is also a problem with the current development version) ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Commit 8d6a3b166c87f1182a60f3c1a2f775d05a9666d9 changed the env local overrides before invoking apt from just `LANG = 'C'` to `LANG='C', LC_ALL = 'C', LC_MESSAGES = 'C', LC_CTYPE = 'C',`. This changes the locale with which postgresql initializes the template and system databases (to C, from e.g. en_US.UTF-8). ##### STEPS TO REPRODUCE Install postgresql-9.1 (only tested with that version) with ansible 1.9.5 using the apt module and a system locale such as `en_US.UTF-8`. ##### EXPECTED RESULTS ``` $ sudo -u postgres psql -U postgres --list List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres (3 rows) ``` ##### ACTUAL RESULTS ``` $ sudo -u postgres psql -U postgres --list List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+-----------+---------+-------+----------------------- postgres | postgres | SQL_ASCII | C | C | template0 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres (3 rows) ``` ",True,"apt locale override changes behavior of postgresql install - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 1.9.5 ``` (looking at the commits, I think this is also a problem with the current development version) ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Commit 8d6a3b166c87f1182a60f3c1a2f775d05a9666d9 changed the env local overrides before invoking apt from just `LANG = 'C'` to `LANG='C', LC_ALL = 'C', LC_MESSAGES = 'C', LC_CTYPE = 'C',`. This changes the locale with which postgresql initializes the template and system databases (to C, from e.g. en_US.UTF-8). ##### STEPS TO REPRODUCE Install postgresql-9.1 (only tested with that version) with ansible 1.9.5 using the apt module and a system locale such as `en_US.UTF-8`. 
##### EXPECTED RESULTS ``` $ sudo -u postgres psql -U postgres --list List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres (3 rows) ``` ##### ACTUAL RESULTS ``` $ sudo -u postgres psql -U postgres --list List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+-----------+---------+-------+----------------------- postgres | postgres | SQL_ASCII | C | C | template0 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres (3 rows) ``` ",1,apt locale override changes behavior of postgresql install issue type bug report component name apt ansible version ansible looking at the commits i think this is also a problem with the current development version configuration os environment n a summary commit changed the env local overrides before invoking apt from just lang c to lang c lc all c lc messages c lc ctype c this changes the locale with which postgresql initializes the template and system databases to c from e g en us utf steps to reproduce install postgresql only tested with that version with ansible using the apt module and a system locale such as en us utf expected results sudo u postgres psql u postgres list list of databases name owner encoding collate ctype access privileges postgres postgres en us utf en us utf postgres en us utf en us utf c postgres postgres ctc postgres postgres en us utf en us utf c postgres postgres ctc postgres rows actual results sudo u postgres psql u postgres list list of databases name owner encoding collate ctype access privileges postgres postgres sql ascii c c postgres sql ascii c c c postgres postgres ctc postgres postgres sql ascii c c c postgres postgres ctc postgres rows ,1 1064,4889233762.0,IssuesEvent,2016-11-18 09:31:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,include_role doesn't work with with_items and multi host ,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY include_role doesn't work with 'with_items' and multi host vars ##### STEPS TO REPRODUCE Playbook ``` - hosts: ref gather_facts: False tasks: - debug: var=""item"" with_items: ""{{ test_var }}"" - include_role: name: ""role_test"" vars: r_var: ""{{ item }}"" with_items: ""{{ test_var }}"" ``` roles/role_test/tasks/main.yml ``` --- - debug: var=""r_var"" ``` hosts: ``` [test] host1 host2 ``` host_vars/host1/main.yml ``` --- test_var: - ""host1_val1"" - ""host1_val2"" ``` host_vars/host2/main.yml ``` --- test_var: - ""host2_val1"" - ""host2_val2"" ``` ##### EXPECTED RESULTS ``` PLAY [test] ********************************************************************* TASK [debug] ******************************************************************* ok: [host1] => (item=host1_val1) => { ""item"": ""host1_val1"" } ok: [host1] => 
(item=host1_val2) => { ""item"": ""host1_val2"" } ok: [host2] => (item=host2_val1) => { ""item"": ""host2_val1"" } ok: [host2] => (item=host2_val2) => { ""item"": ""host2_val2"" } TASK [include_role] ************************************************************ TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val1"" } ok: [host2] => { ""r_var"": ""host2_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val2"" } ok: [host2] => { ""r_var"": ""host2_val2"" } PLAY RECAP ********************************************************************* host1 : ok=3 changed=0 unreachable=0 failed=0 host2 : ok=3 changed=0 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [test] ********************************************************************* TASK [debug] ******************************************************************* ok: [host1] => (item=host1_val1) => { ""item"": ""host1_val1"" } ok: [host1] => (item=host1_val2) => { ""item"": ""host1_val2"" } ok: [host2] => (item=host2_val1) => { ""item"": ""host2_val1"" } ok: [host2] => (item=host2_val2) => { ""item"": ""host2_val2"" } TASK [include_role] ************************************************************ TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host2_val1"" } ok: [host2] => { ""r_var"": ""host2_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host2_val2"" } ok: [host2] => { ""r_var"": ""host2_val2"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val1"" } ok: [host2] => { ""r_var"": ""host1_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val2"" } ok: [host12] => { ""r_var"": ""host1_val2"" } PLAY RECAP ********************************************************************* host1 : ok=5 changed=0 unreachable=0 failed=0 host2 : ok=5 changed=0 unreachable=0 failed=0 ``` If test_var is an empty list for host2, play stops in error : ERROR! 
Unexpected Exception: 'results' ",True,"include_role doesn't work with with_items and multi host - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY include_role doesn't work with 'with_items' and multi host vars ##### STEPS TO REPRODUCE Playbook ``` - hosts: ref gather_facts: False tasks: - debug: var=""item"" with_items: ""{{ test_var }}"" - include_role: name: ""role_test"" vars: r_var: ""{{ item }}"" with_items: ""{{ test_var }}"" ``` roles/role_test/tasks/main.yml ``` --- - debug: var=""r_var"" ``` hosts: ``` [test] host1 host2 ``` host_vars/host1/main.yml ``` --- test_var: - ""host1_val1"" - ""host1_val2"" ``` host_vars/host2/main.yml ``` --- test_var: - ""host2_val1"" - ""host2_val2"" ``` ##### EXPECTED RESULTS ``` PLAY [test] ********************************************************************* TASK [debug] ******************************************************************* ok: [host1] => (item=host1_val1) => { ""item"": ""host1_val1"" } ok: [host1] => (item=host1_val2) => { ""item"": ""host1_val2"" } ok: [host2] => (item=host2_val1) => { ""item"": ""host2_val1"" } ok: [host2] => (item=host2_val2) => { ""item"": ""host2_val2"" } TASK [include_role] ************************************************************ TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val1"" } ok: [host2] => { ""r_var"": ""host2_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val2"" } ok: [host2] => { ""r_var"": ""host2_val2"" } PLAY RECAP ********************************************************************* host1 : ok=3 changed=0 unreachable=0 failed=0 host2 : ok=3 changed=0 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [test] ********************************************************************* TASK [debug] ******************************************************************* ok: [host1] => (item=host1_val1) => { ""item"": ""host1_val1"" } ok: [host1] => (item=host1_val2) => { ""item"": ""host1_val2"" } ok: [host2] => (item=host2_val1) => { ""item"": ""host2_val1"" } ok: [host2] => (item=host2_val2) => { ""item"": ""host2_val2"" } TASK [include_role] ************************************************************ TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host2_val1"" } ok: [host2] => { ""r_var"": ""host2_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host2_val2"" } ok: [host2] => { ""r_var"": ""host2_val2"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val1"" } ok: [host2] => { ""r_var"": ""host1_val1"" } TASK [role_test : debug] ******************************************************* ok: [host1] => { ""r_var"": ""host1_val2"" } ok: [host12] => { ""r_var"": ""host1_val2"" } PLAY RECAP ********************************************************************* host1 : ok=5 changed=0 unreachable=0 failed=0 host2 : ok=5 changed=0 unreachable=0 failed=0 ``` If test_var is an empty list for host2, play stops in error : ERROR! 
Unexpected Exception: 'results' ",1,include role doesn t work with with items and multi host issue type bug report component name include role ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration n a os environment master ubuntu managed rhel summary include role doesn t work with with items and multi host vars steps to reproduce playbook hosts ref gather facts false tasks debug var item with items test var include role name role test vars r var item with items test var roles role test tasks main yml debug var r var hosts host vars main yml test var host vars main yml test var expected results play task ok item item ok item item ok item item ok item item task task ok r var ok r var task ok r var ok r var play recap ok changed unreachable failed ok changed unreachable failed actual results play task ok item item ok item item ok item item ok item item task task ok r var ok r var task ok r var ok r var task ok r var ok r var task ok r var ok r var play recap ok changed unreachable failed ok changed unreachable failed if test var is an empty list for play stops in error error unexpected exception results ,1 1729,6574836703.0,IssuesEvent,2017-09-11 14:14:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,group_by fails on integer values ,affects_2.0 bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY if any key has numeric value, following is thrown: An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 114, in run self._shared_loader_obj, File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 401, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py"", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ##### STEPS TO REPRODUCE inventory: ~~~ host1 ansible_ssh_host=127.0.0.1 groupkey=1 ~~~ ~~~ - hosts: all name: testing groups tasks: - group_by: key={{ groupkey }} - hosts: 1 tasks: - ping: ~~~ ##### EXPECTED RESULTS hosts in group ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 114, in run self._shared_loader_obj, File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 401, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py"", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ``` ",True,"group_by fails on integer values - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY if any key has numeric value, following is thrown: An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 114, in run self._shared_loader_obj, File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 401, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py"", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ##### STEPS TO REPRODUCE inventory: ~~~ host1 ansible_ssh_host=127.0.0.1 groupkey=1 ~~~ ~~~ - hosts: all name: testing groups tasks: - group_by: key={{ groupkey }} - hosts: 1 tasks: - ping: ~~~ ##### EXPECTED RESULTS hosts in group ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 114, in run self._shared_loader_obj, File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 401, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py"", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ``` ",1,group by fails on integer values issue type bug report component name group by ansible version ansible configuration vanilla os environment n a summary if any key has numeric value following is thrown an exception occurred during task execution the full traceback is traceback most recent call last file usr lib dist packages ansible executor process worker py line in run self shared loader obj file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action group by py line in run group name group name replace attributeerror int object has no attribute replace steps to reproduce inventory ansible ssh host groupkey hosts all name testing groups tasks group by key groupkey hosts tasks ping expected results hosts in group actual results an exception occurred during task execution the full traceback is traceback most recent call last file usr lib dist packages ansible executor process worker py line in run self shared loader obj file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action group by py line in run group name group name replace attributeerror int object has no attribute replace ,1 1663,6574059391.0,IssuesEvent,2017-09-11 11:17:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,synchronize become_user not honored when become_method: su is used,affects_2.3 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME synchronize ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.7 (Santiago) ##### SUMMARY become_method: su should allow the remote execution to occur as the become_user specified. However, that is not occurring. ##### STEPS TO REPRODUCE Specify become and synchronize parameters as below. Execute as follows and provide pwd for become_user ansible-playbook dir1_sync.yml -i ansible/hosts --ask-become --diff ``` become: true become_method: su become_user: ruser tasks: - name: ""Verify dir1 sync"" synchronize: src: ""/home/luser/dir1"" dest: ""/apps/dir1"" ``` ##### EXPECTED RESULTS I expect the remote commands to run as the become_user specified. ##### ACTUAL RESULTS The remote command did not run as the become_user. If I change the dest: path to be /tmp, the command executes but the resulting directory on the remote machine is owned by the ssh user. This issue is similar to https://github.com/ansible/ansible-modules-core/issues/4508 but the workaround doesn't work since using become_method: su. 
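Outside the original report, one pattern sometimes used when synchronize cannot run the transfer under su is a two-step copy: rsync to a path the ssh user can write, then move the tree into the restricted destination in a separate become task. This is a minimal sketch only; the staging path /tmp/dir1_stage is a made-up placeholder, and the other paths and user names are taken from the report above.

```yaml
# Illustrative workaround sketch (not from the report): stage the tree where the
# ssh user can write, then copy it into /apps under su in a separate task.
- hosts: all
  tasks:
    - name: Stage dir1 somewhere the ssh user may write (staging path is hypothetical)
      synchronize:
        src: /home/luser/dir1
        dest: /tmp/dir1_stage

    - name: Copy the staged tree into /apps as ruser via su
      command: rsync -a /tmp/dir1_stage/dir1/ /apps/dir1/
      become: true
      become_method: su
      become_user: ruser
```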
``` fatal: [10.193.239.53]: FAILED! => {""changed"": false, ""cmd"": ""/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no' --out-format='<>%i %n%L' \""/home/luser/dir1"" \""10.193.239.53:/apps\"""", ""failed"": true, ""msg"": ""rsync: mkdir \""/apps\"" failed: Permission denied (13)\nrsync error: error in file IO (code 11) at main.c(576) [receiver=3.0.6]\nrsync: connection unexpectedly closed (9 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n"", ""rc"": 12} ``` ",True,"synchronize become_user not honored when become_method: su is used - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME synchronize ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.7 (Santiago) ##### SUMMARY become_method: su should allow the remote execution to occur as the become_user specified. However, that is not occurring. ##### STEPS TO REPRODUCE Specify become and synchronize parameters as below. Execute as follows and provide pwd for become_user ansible-playbook dir1_sync.yml -i ansible/hosts --ask-become --diff ``` become: true become_method: su become_user: ruser tasks: - name: ""Verify dir1 sync"" synchronize: src: ""/home/luser/dir1"" dest: ""/apps/dir1"" ``` ##### EXPECTED RESULTS I expect the remote commands to run as the become_user specified. ##### ACTUAL RESULTS The remote command did not run as the become_user. If I change the dest: path to be /tmp, the command executes but the resulting directory on the remote machine is owned by the ssh user. This issue is similar to https://github.com/ansible/ansible-modules-core/issues/4508 but the workaround doesn't work since using become_method: su. ``` fatal: [10.193.239.53]: FAILED! 
=> {""changed"": false, ""cmd"": ""/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no' --out-format='<>%i %n%L' \""/home/luser/dir1"" \""10.193.239.53:/apps\"""", ""failed"": true, ""msg"": ""rsync: mkdir \""/apps\"" failed: Permission denied (13)\nrsync error: error in file IO (code 11) at main.c(576) [receiver=3.0.6]\nrsync: connection unexpectedly closed (9 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n"", ""rc"": 12} ``` ",1,synchronize become user not honored when become method su is used issue type bug report component name synchronize ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific red hat enterprise linux server release santiago summary become method su should allow the remote execution to occur as the become user specified however that is not occurring steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used specify become and synchronize parameters as below execute as follows and provide pwd for become user ansible playbook sync yml i ansible hosts ask become diff become true become method su become user ruser tasks name verify sync synchronize src home luser dest apps expected results i expect the remote commands to run as the become user specified actual results the remote command did not run as the become user if i change the dest path to be tmp the command executes but the resulting directory on the remote machine is owned by the ssh user this issue is similar to but the workaround doesn t work since using become method su fatal failed changed false cmd usr bin rsync delay updates f compress archive rsh ssh s none o stricthostkeychecking no out format i n l home luser apps failed true msg rsync mkdir apps failed permission denied nrsync error error in file io code at main c nrsync connection unexpectedly closed bytes received so far nrsync error error in rsync protocol data stream code at io c n rc ,1 1879,6577510302.0,IssuesEvent,2017-09-12 01:25:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module does not warn about invalid dictionary entries for volumes,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` 2.0.1.0 ``` ##### CONFIGURATION NA ##### OS / ENVIRONMENT Ubuntu / Ubuntu ##### SUMMARY When passing invalid or unused values to the ec2 module's volumes node, they are just ignored and no warning is produced. ##### STEPS TO REPRODUCE Run the example task. The volumes[0][tags] attribute is not used (that feature doesn't seem to exist) but no warning is emitted. ``` - name: Create ec2 instance local_action: module: ec2 profile: example_profile key_name: ""{{ ec2_keypair }}"" image: ""{{ ec2_ami }}"" instance_type: ""{{ ec2_instance_type }}"" group: ""{{ search_security_group_name }}"" region: ""{{ aws_region }}"" volumes: - device_name: /dev/sda1 delete_on_termination: true volume_size: 60 device_type: gp2 tags: tagName: tagValue wait: yes ``` ##### EXPECTED RESULTS An error or warning, e.g. 
""ec2 volumes do not support ""tags"""" ##### ACTUAL RESULTS tags is simply ignored ",True,"ec2 module does not warn about invalid dictionary entries for volumes - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` 2.0.1.0 ``` ##### CONFIGURATION NA ##### OS / ENVIRONMENT Ubuntu / Ubuntu ##### SUMMARY When passing invalid or unused values to the ec2 module's volumes node, they are just ignored and no warning is produced. ##### STEPS TO REPRODUCE Run the example task. The volumes[0][tags] attribute is not used (that feature doesn't seem to exist) but no warning is emitted. ``` - name: Create ec2 instance local_action: module: ec2 profile: example_profile key_name: ""{{ ec2_keypair }}"" image: ""{{ ec2_ami }}"" instance_type: ""{{ ec2_instance_type }}"" group: ""{{ search_security_group_name }}"" region: ""{{ aws_region }}"" volumes: - device_name: /dev/sda1 delete_on_termination: true volume_size: 60 device_type: gp2 tags: tagName: tagValue wait: yes ``` ##### EXPECTED RESULTS An error or warning, e.g. ""ec2 volumes do not support ""tags"""" ##### ACTUAL RESULTS tags is simply ignored ",1, module does not warn about invalid dictionary entries for volumes issue type bug report component name module ansible version configuration na os environment ubuntu ubuntu summary when passing invalid or unused values to the module s volumes node they are just ignored and no warning is produced steps to reproduce run the example task the volumes attribute is not used that feature doesn t seem to exist but no warning is emitted name create instance local action module profile example profile key name keypair image ami instance type instance type group search security group name region aws region volumes device name dev delete on termination true volume size device type tags tagname tagvalue wait yes expected results an error or warning e g volumes do not support tags actual results tags is simply ignored ,1 1804,6575933684.0,IssuesEvent,2017-09-11 17:53:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mount module no swap support,affects_2.0 feature_idea waiting_on_maintainer,"Come back to me asap, please. Were prepared for impl. if OK. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mount module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ",True,"mount module no swap support - Come back to me asap, please. Were prepared for impl. if OK. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mount module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ",1,mount module no swap support come back to me asap please were prepared for impl if ok issue type feature idea component name mount module ansible version ansible ,1 1829,6577351751.0,IssuesEvent,2017-09-12 00:18:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,redhat_subscription attaches all pools of the same name,affects_1.9 bug_report feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME redhat_subscription ##### ANSIBLE VERSION ``` [cloud-user@node8 ~]$ ansible --version ansible 1.9.4 configured module search path = None ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY Based on the current implementation, you can specify pool like: pool='^OpenShift Enterprise$'. This will return all of the available subscriptions that match and in turn grab the pool_id and attach them. 
The issue is that if you have multiple unique subscriptions of the exact same product (due to different dates), every pool that matches will get attached no matter what. If you had 5 unique 'OpenShift Enterprise' subscriptions, you would be oversubscribing by 4 every time you provisioned a server. There should be a way to define how many pools and or how many subscriptions you want to actually consume through the registration. Additionally, given the regex is only used to obtain the pool_id, it would also be helpful if you could simply specify the pool_id upfront and avoid the RegEx all together. ##### STEPS TO REPRODUCE Use 'redhat_subscription' to subscribe and attach to a pool where there are multiple pools with the exact same name. ##### EXPECTED RESULTS Should only consume a single subscription. ##### ACTUAL RESULTS Consumes N subscriptions where N is the number of subscriptions with the same name. ",True,"redhat_subscription attaches all pools of the same name - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME redhat_subscription ##### ANSIBLE VERSION ``` [cloud-user@node8 ~]$ ansible --version ansible 1.9.4 configured module search path = None ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY Based on the current implementation, you can specify pool like: pool='^OpenShift Enterprise$'. This will return all of the available subscriptions that match and in turn grab the pool_id and attach them. The issue is that if you have multiple unique subscriptions of the exact same product (due to different dates), every pool that matches will get attached no matter what. If you had 5 unique 'OpenShift Enterprise' subscriptions, you would be oversubscribing by 4 every time you provisioned a server. There should be a way to define how many pools and or how many subscriptions you want to actually consume through the registration. Additionally, given the regex is only used to obtain the pool_id, it would also be helpful if you could simply specify the pool_id upfront and avoid the RegEx all together. ##### STEPS TO REPRODUCE Use 'redhat_subscription' to subscribe and attach to a pool where there are multiple pools with the exact same name. ##### EXPECTED RESULTS Should only consume a single subscription. ##### ACTUAL RESULTS Consumes N subscriptions where N is the number of subscriptions with the same name. 
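As an aside to the redhat_subscription record above, later Ansible releases added a pool_ids option to the module, which attaches exactly the pools named by ID and avoids the name-regex matching entirely. The sketch below assumes such a release (newer than the 1.9.4 shown in the report); the credentials and pool ID are placeholders.

```yaml
# Hedged sketch: attach a single explicit pool by ID instead of a name regex.
- hosts: all
  tasks:
    - name: Register and attach exactly one pool
      redhat_subscription:
        state: present
        username: joe_user          # placeholder
        password: somepass          # placeholder
        pool_ids:
          - 0123456789abcdef0123456789abcdef   # placeholder pool ID
```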
",1,redhat subscription attaches all pools of the same name issue type bug report component name redhat subscription ansible version ansible version ansible configured module search path none configuration none os environment n a summary based on the current implementation you can specify pool like pool openshift enterprise this will return all of the available subscriptions that match and in turn grab the pool id and attach them the issue is that if you have multiple unique subscriptions of the exact same product due to different dates every pool that matches will get attached no matter what if you had unique openshift enterprise subscriptions you would be oversubscribing by every time you provisioned a server there should be a way to define how many pools and or how many subscriptions you want to actually consume through the registration additionally given the regex is only used to obtain the pool id it would also be helpful if you could simply specify the pool id upfront and avoid the regex all together steps to reproduce use redhat subscription to subscribe and attach to a pool where there are multiple pools with the exact same name expected results should only consume a single subscription actual results consumes n subscriptions where n is the number of subscriptions with the same name ,1 1896,6577539092.0,IssuesEvent,2017-09-12 01:37:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible-modules-core/cloud/amazon/ec2_vpc.py tag issue - feature request,affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: - ec2_vpc.py ##### Ansible Version: ansible 2.0.0.2 config file = /home/naslanidis/code/ag-vpc-management/ansible.cfg configured module search path = /usr/share/ansible/modules ##### Ansible Configuration: N/A ##### Environment: Fedora 22 x64 ##### Summary: When creating VPC's using this module, if any tag is changed or added a completely new VPC is created rather than the tags being added or updated. Looking at the documentation, all tags are used along with the cidr range to uniquely identify an existing VPC. In my opinion this should just be a NAME tag and other tags should be able to be added, changed and removed for an existing VPC ##### Steps To Reproduce: 1. Create a VPC using this module with a number of tags in place 2. Add a new tag (e.g. cost code), and rerun the script 3. A new duplicate VPC is created rather than adding the cost code tag to the existing VPC ``` - name: test out vpc module 1st run though ec2_vpc: state: present region: ap-southeast-2 profile: non_prod cidr_block: 10.33.0.0/16 resource_tags: Name: AAG-VPC-33 Owner: ABC register: created_vpc ``` ``` - name: test out vpc module 2nd run though ec2_vpc: state: present region: ap-southeast-2 profile: non_prod cidr_block: 10.33.0.0/16 resource_tags: Name: AAG-VPC-33 Owner: ABC Cost_code: 3 register: created_vpc ``` ##### Expected Results: The existing VPC should be updated with the new Cost_code tag. ##### Actual Results: A new VPC is created that duplicates the first VPC. 
",True,"ansible-modules-core/cloud/amazon/ec2_vpc.py tag issue - feature request - ##### Issue Type: - Feature Idea ##### Plugin Name: - ec2_vpc.py ##### Ansible Version: ansible 2.0.0.2 config file = /home/naslanidis/code/ag-vpc-management/ansible.cfg configured module search path = /usr/share/ansible/modules ##### Ansible Configuration: N/A ##### Environment: Fedora 22 x64 ##### Summary: When creating VPC's using this module, if any tag is changed or added a completely new VPC is created rather than the tags being added or updated. Looking at the documentation, all tags are used along with the cidr range to uniquely identify an existing VPC. In my opinion this should just be a NAME tag and other tags should be able to be added, changed and removed for an existing VPC ##### Steps To Reproduce: 1. Create a VPC using this module with a number of tags in place 2. Add a new tag (e.g. cost code), and rerun the script 3. A new duplicate VPC is created rather than adding the cost code tag to the existing VPC ``` - name: test out vpc module 1st run though ec2_vpc: state: present region: ap-southeast-2 profile: non_prod cidr_block: 10.33.0.0/16 resource_tags: Name: AAG-VPC-33 Owner: ABC register: created_vpc ``` ``` - name: test out vpc module 2nd run though ec2_vpc: state: present region: ap-southeast-2 profile: non_prod cidr_block: 10.33.0.0/16 resource_tags: Name: AAG-VPC-33 Owner: ABC Cost_code: 3 register: created_vpc ``` ##### Expected Results: The existing VPC should be updated with the new Cost_code tag. ##### Actual Results: A new VPC is created that duplicates the first VPC. ",1,ansible modules core cloud amazon vpc py tag issue feature request issue type feature idea plugin name vpc py ansible version ansible config file home naslanidis code ag vpc management ansible cfg configured module search path usr share ansible modules ansible configuration n a environment fedora summary when creating vpc s using this module if any tag is changed or added a completely new vpc is created rather than the tags being added or updated looking at the documentation all tags are used along with the cidr range to uniquely identify an existing vpc in my opinion this should just be a name tag and other tags should be able to be added changed and removed for an existing vpc steps to reproduce create a vpc using this module with a number of tags in place add a new tag e g cost code and rerun the script a new duplicate vpc is created rather than adding the cost code tag to the existing vpc name test out vpc module run though vpc state present region ap southeast profile non prod cidr block resource tags name aag vpc owner abc register created vpc name test out vpc module run though vpc state present region ap southeast profile non prod cidr block resource tags name aag vpc owner abc cost code register created vpc expected results the existing vpc should be updated with the new cost code tag actual results a new vpc is created that duplicates the first vpc ,1 1087,4934496947.0,IssuesEvent,2016-11-28 19:15:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,The module service will fail in check mode if the target service is not yet installed on server.,affects_2.2 bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible module service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no settings in ansible.cfg ##### OS / ENVIRONMENT 
Debian/wheezy ##### SUMMARY The module service will fail in check mode if the target service is not yet installed on server. For instance my playbook installs apache2 and, later restart apache2. Running ansible-playbook --check on a server without apache2 will fail. This may be a duplicate but I did not find any issue on this topic. ##### STEPS TO REPRODUCE This Dockerfile will reproduce this issue: https://github.com/pgrange/ansible_service_check_mode_issue Running this playbook in check mode on a brand new server will reproduce this issue: ``` - hosts: all tasks: - apt: name=apache2 - name: no need but I would like to restart apache service: name=apache2 state=restarted ``` ##### EXPECTED RESULTS ansible-playbook --check should not fail if a service to restart is not already installed on server. Why not raise a warning that we are trying to restart an unknown service but that's it. ##### ACTUAL RESULTS What actually happens is that running ansible-playbook in check mode fails: ``` PLAYBOOK: apache.yml *********************************************************** 1 plays in /tmp/ansible/apache.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `"" && echo ansible-tmp-1479828044.78-237878029370849=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `"" ) && sleep 0' PUT /tmp/tmpbW6GVG TO /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/ /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [apt] ********************************************************************* task path: /tmp/ansible/apache.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `"" && echo ansible-tmp-1479828045.04-80599335973148=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `"" ) && sleep 0' PUT /tmp/tmpBgPUnm TO /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/ /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { ""cache_update_time"": 1479825681, ""cache_updated"": false, ""changed"": true, ""diff"": {}, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 0, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""apache2"", ""only_upgrade"": false, ""package"": 
[ ""apache2"" ], ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null }, ""module_name"": ""apt"" }, ""stderr"": """", ""stdout"": ""Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1\n libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3\n libprocps0 procps psmisc ssl-cert\nSuggested packages:\n www-browser apache2-doc apache2-suexec apache2-suexec-custom\n openssl-blacklist\nThe following NEW packages will be installed:\n apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common\n libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2\n libpcre3 libprocps0 procps psmisc ssl-cert\n0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.\nInst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nInst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nInst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nInst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nInst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\nConf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nConf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nConf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nConf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nConf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\n"", ""stdout_lines"": [ ""Reading package lists..."", ""Building dependency tree..."", ""Reading state information..."", ""The following extra packages will be installed:"", "" apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1"", "" libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3"", "" libprocps0 procps psmisc ssl-cert"", ""Suggested packages:"", "" www-browser apache2-doc apache2-suexec apache2-suexec-custom"", "" openssl-blacklist"", ""The following NEW packages will be installed:"", "" apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common"", "" libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2"", "" 
libpcre3 libprocps0 procps psmisc ssl-cert"", ""0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded."", ""Inst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Inst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])"", ""Inst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])"", ""Inst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Inst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])"", ""Inst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])"", ""Conf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Conf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])"", ""Conf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])"", ""Conf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Conf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])"", ""Conf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])"" ] } TASK [no need but I would like to restart apache] ****************************** task path: /tmp/ansible/apache.yml:5 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/service.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `"" && echo ansible-tmp-1479828046.91-253442262595803=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `"" ) && sleep 0' PUT /tmp/tmpu0znF1 TO /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/ /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""arguments"": """", ""enabled"": null, ""name"": ""apache2"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""restarted"" } }, ""msg"": ""no service or tool found for: apache2"" } to retry, use: --limit @/tmp/ansible/apache.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 The command '/bin/sh -c ansible-playbook -vvv -i localhost, -c local /tmp/ansible/apache.yml --check' returned a non-zero code: 2 ``` ",True,"The module service will fail in check mode if the target service is not yet installed on server. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible module service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no settings in ansible.cfg ##### OS / ENVIRONMENT Debian/wheezy ##### SUMMARY The module service will fail in check mode if the target service is not yet installed on server. For instance my playbook installs apache2 and, later restart apache2. Running ansible-playbook --check on a server without apache2 will fail. This may be a duplicate but I did not find any issue on this topic. ##### STEPS TO REPRODUCE This Dockerfile will reproduce this issue: https://github.com/pgrange/ansible_service_check_mode_issue Running this playbook in check mode on a brand new server will reproduce this issue: ``` - hosts: all tasks: - apt: name=apache2 - name: no need but I would like to restart apache service: name=apache2 state=restarted ``` ##### EXPECTED RESULTS ansible-playbook --check should not fail if a service to restart is not already installed on server. Why not raise a warning that we are trying to restart an unknown service but that's it. 
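For the check-mode failure described in this record, a minimal hedged sketch of a common workaround: let the restart task tolerate failure only while running under --check, using the ansible_check_mode variable (available since Ansible 2.1), so a real run still fails loudly if apache2 cannot be restarted.

```yaml
# Sketch only: ansible_check_mode is true only under --check, so ignore_errors
# is effectively disabled during a normal run.
- hosts: all
  tasks:
    - apt:
        name: apache2

    - name: Restart apache without breaking --check on a fresh host
      service:
        name: apache2
        state: restarted
      ignore_errors: "{{ ansible_check_mode }}"
```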
##### ACTUAL RESULTS What actually happens is that running ansible-playbook in check mode fails: ``` PLAYBOOK: apache.yml *********************************************************** 1 plays in /tmp/ansible/apache.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `"" && echo ansible-tmp-1479828044.78-237878029370849=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `"" ) && sleep 0' PUT /tmp/tmpbW6GVG TO /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/ /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [apt] ********************************************************************* task path: /tmp/ansible/apache.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `"" && echo ansible-tmp-1479828045.04-80599335973148=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `"" ) && sleep 0' PUT /tmp/tmpBgPUnm TO /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/ /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { ""cache_update_time"": 1479825681, ""cache_updated"": false, ""changed"": true, ""diff"": {}, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 0, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""apache2"", ""only_upgrade"": false, ""package"": [ ""apache2"" ], ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null }, ""module_name"": ""apt"" }, ""stderr"": """", ""stdout"": ""Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1\n libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3\n libprocps0 procps psmisc ssl-cert\nSuggested packages:\n www-browser apache2-doc apache2-suexec apache2-suexec-custom\n openssl-blacklist\nThe following NEW packages will be installed:\n apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common\n libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2\n libpcre3 libprocps0 procps psmisc ssl-cert\n0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.\nInst 
libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nInst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nInst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nInst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nInst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\nConf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nConf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nConf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nConf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nConf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\n"", ""stdout_lines"": [ ""Reading package lists..."", ""Building dependency tree..."", ""Reading state information..."", ""The following extra packages will be installed:"", "" apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1"", "" libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3"", "" libprocps0 procps psmisc ssl-cert"", ""Suggested packages:"", "" www-browser apache2-doc apache2-suexec apache2-suexec-custom"", "" openssl-blacklist"", ""The following NEW packages will be installed:"", "" apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common"", "" libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2"", "" libpcre3 libprocps0 procps psmisc ssl-cert"", ""0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded."", ""Inst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Inst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])"", ""Inst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])"", ""Inst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Inst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Inst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2-mpm-worker 
(2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Inst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])"", ""Inst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])"", ""Conf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Conf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])"", ""Conf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])"", ""Conf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])"", ""Conf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])"", ""Conf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])"", ""Conf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])"", ""Conf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])"" ] } TASK [no need but I would like to restart apache] ****************************** task path: /tmp/ansible/apache.yml:5 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/service.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `"" && echo ansible-tmp-1479828046.91-253442262595803=""` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `"" ) && sleep 0' PUT /tmp/tmpu0znF1 TO /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/ /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""arguments"": """", ""enabled"": null, ""name"": ""apache2"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""restarted"" } }, ""msg"": ""no service or tool found for: apache2"" } to retry, use: --limit @/tmp/ansible/apache.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 The command '/bin/sh -c ansible-playbook -vvv -i localhost, -c local /tmp/ansible/apache.yml --check' returned a non-zero code: 2 ``` ",1,the module service will fail in check mode if the target service is not yet installed on server issue type bug report component name ansible module service ansible version ansible config file configured module search path default w o overrides configuration no settings in ansible cfg os environment debian wheezy summary the module service will fail in check mode if the target service is not yet installed on server for instance my playbook installs and later restart running ansible playbook check on a server without will fail this may be a duplicate but i did not find any issue on this topic steps to reproduce this dockerfile will reproduce this issue running this playbook in check mode on a brand new server will reproduce this issue hosts all tasks apt name name no need but i would like to restart apache service name state restarted expected results ansible playbook check should not fail if a service to restart is not already installed on server why not raise a warning that we are trying to restart an unknown service but that s it actual results what actually happens is that running ansible playbook in check mode fails playbook apache yml plays in tmp ansible apache yml play task using module file usr local lib dist packages ansible modules core system setup py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp setup py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python root ansible tmp ansible tmp setup py rm rf root ansible tmp ansible tmp dev null sleep ok task task path tmp ansible apache yml using module file usr local lib dist packages ansible modules core packaging os apt py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpbgpunm to root ansible tmp ansible tmp apt py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp apt py sleep exec bin sh c usr bin python root ansible tmp ansible tmp apt py rm rf root ansible tmp ansible tmp dev null sleep changed cache update time cache updated false changed true diff invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null name only upgrade false package purge false state present update cache false upgrade null module name apt stderr stdout reading package lists nbuilding dependency tree nreading state information nthe following extra packages will be installed n mpm worker utils bin common n dbd ldap n procps psmisc ssl cert nsuggested packages n www browser doc suexec suexec custom n openssl blacklist nthe following new packages will be 
installed n mpm worker utils bin common n dbd ldap n procps psmisc ssl cert upgraded newly installed to remove and not upgraded ninst debian oldstable ninst debian oldstable ninst debian oldstable ninst procps debian oldstable ninst debian oldstable ninst debian oldstable ninst dbd debian oldstable ninst ldap debian oldstable ninst bin debian security oldstable ninst utils debian security oldstable ninst common debian security oldstable ninst mpm worker debian security oldstable ninst debian security oldstable ninst psmisc debian oldstable ninst ssl cert debian oldstable nconf debian oldstable nconf debian oldstable nconf debian oldstable nconf procps debian oldstable nconf debian oldstable nconf debian oldstable nconf dbd debian oldstable nconf ldap debian oldstable nconf bin debian security oldstable nconf utils debian security oldstable nconf common debian security oldstable nconf mpm worker debian security oldstable nconf debian security oldstable nconf psmisc debian oldstable nconf ssl cert debian oldstable n stdout lines reading package lists building dependency tree reading state information the following extra packages will be installed mpm worker utils bin common dbd ldap procps psmisc ssl cert suggested packages www browser doc suexec suexec custom openssl blacklist the following new packages will be installed mpm worker utils bin common dbd ldap procps psmisc ssl cert upgraded newly installed to remove and not upgraded inst debian oldstable inst debian oldstable inst debian oldstable inst procps debian oldstable inst debian oldstable inst debian oldstable inst dbd debian oldstable inst ldap debian oldstable inst bin debian security oldstable inst utils debian security oldstable inst common debian security oldstable inst mpm worker debian security oldstable inst debian security oldstable inst psmisc debian oldstable inst ssl cert debian oldstable conf debian oldstable conf debian oldstable conf debian oldstable conf procps debian oldstable conf debian oldstable conf debian oldstable conf dbd debian oldstable conf ldap debian oldstable conf bin debian security oldstable conf utils debian security oldstable conf common debian security oldstable conf mpm worker debian security oldstable conf debian security oldstable conf psmisc debian oldstable conf ssl cert debian oldstable task task path tmp ansible apache yml using module file usr local lib dist packages ansible modules core system service py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp service py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp service py sleep exec bin sh c usr bin python root ansible tmp ansible tmp service py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args arguments enabled null name pattern null runlevel default sleep null state restarted msg no service or tool found for to retry use limit tmp ansible apache retry play recap localhost ok changed unreachable failed the command bin sh c ansible playbook vvv i localhost c local tmp ansible apache yml check returned a non zero code ,1 1072,4890830236.0,IssuesEvent,2016-11-18 15:03:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,systemd fails on masking nonexistent service,affects_2.2 bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### 
COMPONENT NAME systemd ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Debian Jessie 64 GNU/Linux ##### SUMMARY Module systemd fails to mask nonexistent service. Shell command `systemctl mask hello` works fine with nonexistent services. It is useful to mask service before package installation to avoid autostart, e.g.: ``` systemctl mask redis-server apt-get install redis-server systemctl unmask redis-server ``` ##### STEPS TO REPRODUCE ``` ansible -m systemd -a 'name=hello masked=True' localhost -s ``` ##### EXPECTED RESULTS As in systemctl command: ``` $ systemctl mask hello Created symlink from /etc/systemd/system/hello.service to /dev/null. ``` ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Could not find the requested service \""'hello'\"": "" } ``` ",True,"systemd fails on masking nonexistent service - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME systemd ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Debian Jessie 64 GNU/Linux ##### SUMMARY Module systemd fails to mask nonexistent service. Shell command `systemctl mask hello` works fine with nonexistent services. It is useful to mask service before package installation to avoid autostart, e.g.: ``` systemctl mask redis-server apt-get install redis-server systemctl unmask redis-server ``` ##### STEPS TO REPRODUCE ``` ansible -m systemd -a 'name=hello masked=True' localhost -s ``` ##### EXPECTED RESULTS As in systemctl command: ``` $ systemctl mask hello Created symlink from /etc/systemd/system/hello.service to /dev/null. ``` ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Could not find the requested service \""'hello'\"": "" } ``` ",1,systemd fails on masking nonexistent service issue type bug report component name systemd ansible version ansible configuration default os environment debian jessie gnu linux summary module systemd fails to mask nonexistent service shell command systemctl mask hello works fine with nonexistent services it is useful to mask service before package installation to avoid autostart e g systemctl mask redis server apt get install redis server systemctl unmask redis server steps to reproduce ansible m systemd a name hello masked true localhost s expected results as in systemctl command systemctl mask hello created symlink from etc systemd system hello service to dev null actual results localhost failed changed false failed true msg could not find the requested service hello ,1 1054,4864099234.0,IssuesEvent,2016-11-14 17:04:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec_asg does not support creation of ASGs that specify a placement group,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ec_asg does not support creation of ASGs that specify a placement group, even though Boto does support it. ##### STEPS TO REPRODUCE ``` ec2_asg: ... placement_group: some_placement_group ``` ##### EXPECTED RESULTS Creation of ASG with specified placement group. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: placement_group""} ``` ",True,"ec_asg does not support creation of ASGs that specify a placement group - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ec_asg does not support creation of ASGs that specify a placement group, even though Boto does support it. ##### STEPS TO REPRODUCE ``` ec2_asg: ... placement_group: some_placement_group ``` ##### EXPECTED RESULTS Creation of ASG with specified placement group. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: placement_group""} ``` ",1,ec asg does not support creation of asgs that specify a placement group issue type bug report component name asg ansible version ansible configuration os environment summary ec asg does not support creation of asgs that specify a placement group even though boto does support it steps to reproduce asg placement group some placement group expected results creation of asg with specified placement group actual results fatal failed changed false failed true msg unsupported parameter for module placement group ,1 981,4746537643.0,IssuesEvent,2016-10-21 11:33:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,YAML support not working correctly,affects_2.2 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/cloud/amazon/cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 29fda4be1e) last updated 2016/09/16 18:26:39 (GMT +000) lib/ansible/modules/core: (detached HEAD 2e1e3562b9) last updated 2016/09/16 18:26:41 (GMT +000) lib/ansible/modules/extras: (detached HEAD 9b5c64e240) last updated 2016/09/16 18:26:42 (GMT +000) config file = /opt/ansible_dev/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT Amazon Linux ##### SUMMARY When using the CloudFormation template in YAML format, the scripts are encountering an error. It appears that line 277 of the 'cloudformation.py' module attempts to parse the template as valid YAML with the standard yaml library and then convert that into JSON. This is completely unnecessary as the CloudFormation engine will throw back an error if there is some kind of formatting issue. In fact, attempting to convert the YAML file back to JSON will almost always break the format in some way. ##### WORKAROUND: Commenting out lines 276 and 277 allows the module to work fine using a YAML formatted template. ##### STEPS TO REPRODUCE ``` --- Sample playbook --- --- - hosts: localhost tasks: - name: Build EC2 windows instance cloudformation: stack_name: ""ansibleTestInstance"" state: ""present"" region: ""us-east-1"" disable_rollback: true template: ""files/generic_cf_template.yml"" template_format: ""yaml"" template_parameters: InstanceType: ""t2.medium"" KeyName: ""your-master-key"" RDPSecurityGroup: ""sg-12345678"" DestinationSubnet: ""subnet-12345678"" WindowsAMI: ""ami-ee7805f9"" InstanceIAMRoleName: ""test_iam_role"" tags: Name: ""ansible-test-instance"" --- Sample CloudFormation.yml file: --- AWSTemplateFormatVersion: '2010-09-09' Description: > This template creates Amazon EC2 Windows instance and related resources. You will be billed for the AWS resources used if you create a stack from this template. 
Outputs: EC2InstanceId: Description: The ID of the instance created. Value: !Ref WindowsServer Parameters: InstanceType: ConstraintDescription: Must be a valid EC2 instance type. Default: t2.medium Description: Amazon EC2 instance type Type: String KeyName: ConstraintDescription: must be the name of an existing EC2 KeyPair. Description: Name of an existing EC2 KeyPair Type: AWS::EC2::KeyPair::KeyName RDPSecurityGroup: Description: Select the security group that will allow RDP connections to this machine. Type: AWS::EC2::SecurityGroup::Id DestinationSubnet: Description: Select the subnet where to place the new instance. Type: AWS::EC2::Subnet::Id WindowsAMI: Description: Enter the id of the Windows AMI you wish to use. Default: ami-ee7805f9 Type: AWS::EC2::Image::Id InstanceIAMRoleName: Description: Enter the name of the IAM Role you wish to use for this instance. (NOTE this is just the role name, not an ARN) Type: String Resources: WindowsServer: Type: AWS::EC2::Instance Properties: ImageId: Ref: WindowsAMI InstanceType: Ref: InstanceType KeyName: Ref: KeyName SubnetId: Ref: DestinationSubnet SecurityGroupIds: - Ref: RDPSecurityGroup IamInstanceProfile: Ref: InstanceIAMRoleName ``` ",True,"YAML support not working correctly - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/cloud/amazon/cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 29fda4be1e) last updated 2016/09/16 18:26:39 (GMT +000) lib/ansible/modules/core: (detached HEAD 2e1e3562b9) last updated 2016/09/16 18:26:41 (GMT +000) lib/ansible/modules/extras: (detached HEAD 9b5c64e240) last updated 2016/09/16 18:26:42 (GMT +000) config file = /opt/ansible_dev/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT Amazon Linux ##### SUMMARY When using the CloudFormation template in YAML format, the scripts are encountering an error. It appears that line 277 of the 'cloudformation.py' module attempts to parse the template as valid YAML with the standard yaml library and then convert that into JSON. This is completely unnecessary as the CloudFormation engine will throw back an error if there is some kind of formatting issue. In fact, attempting to convert the YAML file back to JSON will almost always break the format in some way. ##### WORKAROUND: Commenting out lines 276 and 277 allows the module to work fine using a YAML formatted template. ##### STEPS TO REPRODUCE ``` --- Sample playbook --- --- - hosts: localhost tasks: - name: Build EC2 windows instance cloudformation: stack_name: ""ansibleTestInstance"" state: ""present"" region: ""us-east-1"" disable_rollback: true template: ""files/generic_cf_template.yml"" template_format: ""yaml"" template_parameters: InstanceType: ""t2.medium"" KeyName: ""your-master-key"" RDPSecurityGroup: ""sg-12345678"" DestinationSubnet: ""subnet-12345678"" WindowsAMI: ""ami-ee7805f9"" InstanceIAMRoleName: ""test_iam_role"" tags: Name: ""ansible-test-instance"" --- Sample CloudFormation.yml file: --- AWSTemplateFormatVersion: '2010-09-09' Description: > This template creates Amazon EC2 Windows instance and related resources. You will be billed for the AWS resources used if you create a stack from this template. Outputs: EC2InstanceId: Description: The ID of the instance created. Value: !Ref WindowsServer Parameters: InstanceType: ConstraintDescription: Must be a valid EC2 instance type. 
Default: t2.medium Description: Amazon EC2 instance type Type: String KeyName: ConstraintDescription: must be the name of an existing EC2 KeyPair. Description: Name of an existing EC2 KeyPair Type: AWS::EC2::KeyPair::KeyName RDPSecurityGroup: Description: Select the security group that will allow RDP connections to this machine. Type: AWS::EC2::SecurityGroup::Id DestinationSubnet: Description: Select the subnet where to place the new instance. Type: AWS::EC2::Subnet::Id WindowsAMI: Description: Enter the id of the Windows AMI you wish to use. Default: ami-ee7805f9 Type: AWS::EC2::Image::Id InstanceIAMRoleName: Description: Enter the name of the IAM Role you wish to use for this instance. (NOTE this is just the role name, not an ARN) Type: String Resources: WindowsServer: Type: AWS::EC2::Instance Properties: ImageId: Ref: WindowsAMI InstanceType: Ref: InstanceType KeyName: Ref: KeyName SubnetId: Ref: DestinationSubnet SecurityGroupIds: - Ref: RDPSecurityGroup IamInstanceProfile: Ref: InstanceIAMRoleName ``` ",1,yaml support not working correctly issue type bug report component name ansible modules core cloud amazon cloudformation ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file opt ansible dev ansible cfg configured module search path default w o overrides configuration n a os environment amazon linux summary when using the cloudformation template in yaml format the scripts are encountering an error it appears that line of the cloudformation py module attempts to parse the template as valid yaml with the standard yaml library and then convert that into json this is completely unnecessary as the cloudformation engine will throw back an error if there is some kind of formatting issue in fact attempting to convert the yaml file back to json will almost always break the format in some way workaround commenting out lines and allows the module to work fine using a yaml formatted template steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used sample playbook hosts localhost tasks name build windows instance cloudformation stack name ansibletestinstance state present region us east disable rollback true template files generic cf template yml template format yaml template parameters instancetype medium keyname your master key rdpsecuritygroup sg destinationsubnet subnet windowsami ami instanceiamrolename test iam role tags name ansible test instance sample cloudformation yml file awstemplateformatversion description this template creates amazon windows instance and related resources you will be billed for the aws resources used if you create a stack from this template outputs description the id of the instance created value ref windowsserver parameters instancetype constraintdescription must be a valid instance type default medium description amazon instance type type string keyname constraintdescription must be the name of an existing keypair description name of an existing keypair type aws keypair keyname rdpsecuritygroup description select the security group that will allow rdp connections to this machine type aws securitygroup id destinationsubnet description select the subnet where to place the new instance type aws subnet id windowsami description enter the id of the windows ami you wish to use default ami type aws image id instanceiamrolename description enter the name of the iam role you wish to use for 
this instance note this is just the role name not an arn type string resources windowsserver type aws instance properties imageid ref windowsami instancetype ref instancetype keyname ref keyname subnetid ref destinationsubnet securitygroupids ref rdpsecuritygroup iaminstanceprofile ref instanceiamrolename ,1 1746,6574930271.0,IssuesEvent,2017-09-11 14:31:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,make hostname work on Virtuozzo / OpenVZ,affects_2.1 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION irrelevant ##### OS / ENVIRONMENT ``` ""ansible_distribution"": ""Virtuozzo"", ""ansible_distribution_major_version"": ""7"", ""ansible_distribution_release"": ""NA"", ""ansible_distribution_version"": ""7.2"", ""ansible_lsb"": { ""codename"": ""n/a"", ""description"": ""Virtuozzo Linux release 7.2"", ""id"": ""Virtuozzo"", ""major_release"": ""7"", ""release"": ""7.2"" }, ""ansible_os_family"": ""Virtuozzo"", ``` ##### SUMMARY Module hostname should work also on Virtuozzo distribution, which is a systemd based CentOS-Fork ##### STEPS TO REPRODUCE ``` - name: set hostname hostname: name=basic.hostname when: basic is defined ``` ##### EXPECTED RESULTS hostname should be set according to given variable ##### ACTUAL RESULTS ``` fatal: [xyz]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""name"": ""basic.hostname""}, ""module_name"": ""hostname""}, ""msg"": ""hostname module cannot be used on platform Linux (Virtuozzo linux)""} ``` ",True,"make hostname work on Virtuozzo / OpenVZ - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION irrelevant ##### OS / ENVIRONMENT ``` ""ansible_distribution"": ""Virtuozzo"", ""ansible_distribution_major_version"": ""7"", ""ansible_distribution_release"": ""NA"", ""ansible_distribution_version"": ""7.2"", ""ansible_lsb"": { ""codename"": ""n/a"", ""description"": ""Virtuozzo Linux release 7.2"", ""id"": ""Virtuozzo"", ""major_release"": ""7"", ""release"": ""7.2"" }, ""ansible_os_family"": ""Virtuozzo"", ``` ##### SUMMARY Module hostname should work also on Virtuozzo distribution, which is a systemd based CentOS-Fork ##### STEPS TO REPRODUCE ``` - name: set hostname hostname: name=basic.hostname when: basic is defined ``` ##### EXPECTED RESULTS hostname should be set according to given variable ##### ACTUAL RESULTS ``` fatal: [xyz]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""name"": ""basic.hostname""}, ""module_name"": ""hostname""}, ""msg"": ""hostname module cannot be used on platform Linux (Virtuozzo linux)""} ``` ",1,make hostname work on virtuozzo openvz issue type feature idea component name hostname ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables irrelevant os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible distribution virtuozzo ansible distribution major version ansible distribution release na ansible distribution version ansible lsb codename n a description virtuozzo linux release id virtuozzo major release release ansible os family virtuozzo summary module hostname should work also on virtuozzo distribution which is a systemd based centos fork steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name set hostname hostname name basic hostname when basic is defined expected results hostname should be set according to given variable actual results fatal failed changed false failed true invocation module args name basic hostname module name hostname msg hostname module cannot be used on platform linux virtuozzo linux ,1 1762,6575000000.0,IssuesEvent,2017-09-11 14:44:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Azure NSG Documenation missing some fields,affects_2.3 azure cloud docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME http://docs.ansible.com/ansible/azure_rm_securitygroup_module.html ##### SUMMARY The documention does not indicate some fields related to rules - source_address_prefix - Does this support tags as in the PORTAL? - direction - Assumed set is Inbound/Outbound - access - Assumed set isDeny/Allow ",True,"Azure NSG Documenation missing some fields - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME http://docs.ansible.com/ansible/azure_rm_securitygroup_module.html ##### SUMMARY The documention does not indicate some fields related to rules - source_address_prefix - Does this support tags as in the PORTAL? - direction - Assumed set is Inbound/Outbound - access - Assumed set isDeny/Allow ",1,azure nsg documenation missing some fields issue type documentation report component name summary the documention does not indicate some fields related to rules source address prefix does this support tags as in the portal direction assumed set is inbound outbound access assumed set isdeny allow ,1 745,4350929598.0,IssuesEvent,2016-07-31 15:23:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Apt module - the possibilty to know if a debian package is present or not ,feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.2.0 config file = configured module search path = Default w/o override ``` ##### CONFIGURATION No files and no env ##### OS / ENVIRONMENT ``` lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.4 (jessie) Release: 8.4 Codename: jessie ``` ##### SUMMARY Just for obtain an little feature : the possibilty to know if a debian package is present or not . 
After, if the condition is true, we can register the package version. Because, using the shell module is dirty :+1: ``` - name: test version shell: haproxy -v | awk '$0 ~ /HA-Proxy/ {print$3}' register: haproxyversion tags: - status - name: status of backends shell: echo ""show servers state"" | nc localhost 666 | grep -Ev ""^1|^#|^$"" | awk '{print""frontend:"""" ""$2"" """"backend:"""" ""$4"" """"ip:"""" ""$5"" """"status:"""" ""$6}' register: haproxyout when: haproxyversion.stdout.find('1.6') != -1 tags: - status ``` ##### STEPS TO REPRODUCE It's not a bug ##### EXPECTED RESULTS It's not a bug ##### ACTUAL RESULTS It's not a bug ",True,"Apt module - the possibilty to know if a debian package is present or not - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.2.0 config file = configured module search path = Default w/o override ``` ##### CONFIGURATION No files and no env ##### OS / ENVIRONMENT ``` lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.4 (jessie) Release: 8.4 Codename: jessie ``` ##### SUMMARY Just for obtain an little feature : the possibilty to know if a debian package is present or not . After, if the condition is true, we can register the package version. Because, using the shell module is dirty :+1: ``` - name: test version shell: haproxy -v | awk '$0 ~ /HA-Proxy/ {print$3}' register: haproxyversion tags: - status - name: status of backends shell: echo ""show servers state"" | nc localhost 666 | grep -Ev ""^1|^#|^$"" | awk '{print""frontend:"""" ""$2"" """"backend:"""" ""$4"" """"ip:"""" ""$5"" """"status:"""" ""$6}' register: haproxyout when: haproxyversion.stdout.find('1.6') != -1 tags: - status ``` ##### STEPS TO REPRODUCE It's not a bug ##### EXPECTED RESULTS It's not a bug ##### ACTUAL RESULTS It's not a bug ",1,apt module the possibilty to know if a debian package is present or not issue type feature idea component name apt ansible version ansible version ansible config file configured module search path default w o override configuration no files and no env os environment lsb release a no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie summary just for obtain an little feature the possibilty to know if a debian package is present or not after if the condition is true we can register the package version because using the shell module is dirty name test version shell haproxy v awk ha proxy print register haproxyversion tags status name status of backends shell echo show servers state nc localhost grep ev awk print frontend backend ip status register haproxyout when haproxyversion stdout find tags status steps to reproduce it s not a bug expected results it s not a bug actual results it s not a bug ,1 1086,4934170651.0,IssuesEvent,2016-11-28 18:18:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"copy: dest=filename fails with ""Destination directory does not exist""",affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ##### SUMMARY When I try to copy a file into the current working directory on the remote end, Ansible fails because it tries to check for the existence of the (empty string) directory. 
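Until the module accepts a bare file name as a destination, a hedged workaround is to pass an absolute destination or to create the target directory first; the paths below are illustrative only:

```yaml
# Spell out the destination as a full path instead of a bare file name.
- name: Copy with an explicit absolute destination
  copy:
    src: /dev/null
    dest: "{{ ansible_env.HOME }}/file.txt"

# Or guarantee the directory exists before copying into it.
- name: Ensure the destination directory exists
  file:
    path: /tmp/drop
    state: directory

- name: Copy into the pre-created directory
  copy:
    src: /dev/null
    dest: /tmp/drop/file.txt
```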
##### STEPS TO REPRODUCE ``` $ ansible localhost -m copy -a ""src=/dev/null dest=file.txt"" ``` ##### EXPECTED RESULTS I expect an empty file `file.txt` to show up in my home directory ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""checksum"": ""da39a3ee5e6b4b0d3255bfef95601890afd80709"", ""failed"": true, ""msg"": ""Destination directory does not exist"" } ``` ",True,"copy: dest=filename fails with ""Destination directory does not exist"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ##### SUMMARY When I try to copy a file into the current working directory on the remote end, Ansible fails because it tries to check for the existence of the (empty string) directory. ##### STEPS TO REPRODUCE ``` $ ansible localhost -m copy -a ""src=/dev/null dest=file.txt"" ``` ##### EXPECTED RESULTS I expect an empty file `file.txt` to show up in my home directory ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""checksum"": ""da39a3ee5e6b4b0d3255bfef95601890afd80709"", ""failed"": true, ""msg"": ""Destination directory does not exist"" } ``` ",1,copy dest filename fails with destination directory does not exist issue type bug report component name copy ansible version ansible config file configured module search path default w o overrides configuration os environment linux summary when i try to copy a file into the current working directory on the remote end ansible fails because it tries to check for the existence of the empty string directory steps to reproduce ansible localhost m copy a src dev null dest file txt expected results i expect an empty file file txt to show up in my home directory actual results localhost failed changed false checksum failed true msg destination directory does not exist ,1 1673,6574094040.0,IssuesEvent,2017-09-11 11:27:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image: unable to deal with image IDs,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_image` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to delete an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_image -a 'name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 state=absent' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_image -a 'name=alpine state=absent' localhost`. ``` localhost | SUCCESS => { ""actions"": [ ""Removed image sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3"" ], ""changed"": true, ""image"": { ""state"": ""Deleted"" } } ``` ##### ACTUAL RESULTS Instead no image is deleted. 
``` localhost | SUCCESS => { ""actions"": [], ""changed"": false, ""image"": {} } ``` ",True,"docker_image: unable to deal with image IDs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_image` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to delete an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_image -a 'name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 state=absent' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_image -a 'name=alpine state=absent' localhost`. ``` localhost | SUCCESS => { ""actions"": [ ""Removed image sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3"" ], ""changed"": true, ""image"": { ""state"": ""Deleted"" } } ``` ##### ACTUAL RESULTS Instead no image is deleted. ``` localhost | SUCCESS => { ""actions"": [], ""changed"": false, ""image"": {} } ``` ",1,docker image unable to deal with image ids issue type bug report component name docker image ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing images by id ansible should do the same otherwise it s impossible to delete an unnamed image steps to reproduce sh docker pull alpine docker inspect format id alpine ansible m docker image a name state absent localhost expected results the output should be the same as from ansible m docker image a name alpine state absent localhost localhost success actions removed image changed true image state deleted actual results instead no image is deleted localhost success actions changed false image ,1 963,4706289311.0,IssuesEvent,2016-10-13 16:42:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible-modules-core/network/ - Code review,affects_2.2 bug_report in progress networking P1 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_facts ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100) lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100) lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've raised one issue to track all the issues found rather than having a fairly bitty chain of tickets. If it's easier for you to raise different PRs to address the issues found I'm no issue with that - whatever is easiest for you. 
I'm wondering if for items we are happy with we should add ignore markers in, as shown here http://stackoverflow.com/questions/28829236/is-it-possible-to-ignore-one-single-specific-line-with-pylint ``` pylint -E network/*/* No config file found, using default configuration ************* Module ansible.modules.core.network.nxos.nxos_hsrp E:402,41: Undefined variable 'module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_interface E:147,19: Instance of 'list' has no 'split' member (no-member) E:535,56: Undefined variable 'command' (undefined-variable) E:581,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_static_route E:158,23: Instance of 'CustomNetworkConfig' has no 'to_lines' member (no-member) E:295,26: Instance of 'list' has no 'split' member (no-member) E:402,66: Using variable 'address' before assignment (used-before-assignment) ************* Module ansible.modules.core.network.nxos.nxos_switchport E:486,56: Undefined variable 'command' (undefined-variable) E:527,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_vrf E:501,52: Undefined variable 'cmds' (undefined-variable) ``` ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"ansible-modules-core/network/ - Code review - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_facts ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100) lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100) lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've raised one issue to track all the issues found rather than having a fairly bitty chain of tickets. If it's easier for you to raise different PRs to address the issues found I'm no issue with that - whatever is easiest for you. 
I'm wondering if for items we are happy with we should add ignore markers in, as shown here http://stackoverflow.com/questions/28829236/is-it-possible-to-ignore-one-single-specific-line-with-pylint ``` pylint -E network/*/* No config file found, using default configuration ************* Module ansible.modules.core.network.nxos.nxos_hsrp E:402,41: Undefined variable 'module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_interface E:147,19: Instance of 'list' has no 'split' member (no-member) E:535,56: Undefined variable 'command' (undefined-variable) E:581,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_static_route E:158,23: Instance of 'CustomNetworkConfig' has no 'to_lines' member (no-member) E:295,26: Instance of 'list' has no 'split' member (no-member) E:402,66: Using variable 'address' before assignment (used-before-assignment) ************* Module ansible.modules.core.network.nxos.nxos_switchport E:486,56: Undefined variable 'command' (undefined-variable) E:527,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_vrf E:501,52: Undefined variable 'cmds' (undefined-variable) ``` ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,ansible modules core network code review issue type bug report component name eos facts ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary i ve raised one issue to track all the issues found rather than having a fairly bitty chain of tickets if it s easier for you to raise different prs to address the issues found i m no issue with that whatever is easiest for you i m wondering if for items we are happy with we should add ignore markers in as shown here pylint e network no config file found using default configuration module ansible modules core network nxos nxos hsrp e undefined variable module undefined variable module ansible modules core network nxos nxos interface e instance of list has no split member no member e undefined variable command undefined variable e undefined variable get module undefined variable module ansible modules core network nxos nxos static route e instance of customnetworkconfig has no to lines member no member e instance of list has no split member no member e using variable address before assignment used before assignment module ansible modules core network nxos nxos switchport e undefined variable command undefined variable e undefined variable get module undefined variable module ansible modules core network nxos nxos vrf e undefined variable cmds undefined variable steps to reproduce expected results actual results ,1 1780,6575830198.0,IssuesEvent,2017-09-11 17:29:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,hg: updating to empty changeset (a.k.a. revision -1 or null) doesn't work,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hg ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/argh/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from Debian GNU/Linux testing, managing Ubuntu 14.04.5 LTS. ##### SUMMARY Pulling all changes and updating a Mercurial repo to empty changeset (for backup/mirroring purposes, for example) used to work(?) 
with `revision: null`, now it doesn't. It also doesn't work with `revision: -1`, but works with `revision: 'null'` (i.e. with quotes). ##### STEPS TO REPRODUCE ``` yaml - name: Pull an example repo hg: repo: https://www.mercurial-scm.org/repo/hello/ dest: /tmp/hello revision: null # or -1 ``` ##### EXPECTED RESULTS The repo is cloned/pulled, and only contains .hg/ directory with file history, but no files in working directory. At the initial clone the report with -vv should say `{... ""before"": """", ""after"": ""000000000000 default"" ...}`. ##### ACTUAL RESULTS Cloning/pulling works fine, but there are `Makefile` and `hello.c`, updated to the latest revision. Also, with -vv the report says `{... ""before"": """", ""after"": ""82e55d328c8c default tip"" ...}`. ",True,"hg: updating to empty changeset (a.k.a. revision -1 or null) doesn't work - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hg ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/argh/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from Debian GNU/Linux testing, managing Ubuntu 14.04.5 LTS. ##### SUMMARY Pulling all changes and updating a Mercurial repo to empty changeset (for backup/mirroring purposes, for example) used to work(?) with `revision: null`, now it doesn't. It also doesn't work with `revision: -1`, but works with `revision: 'null'` (i.e. with quotes). ##### STEPS TO REPRODUCE ``` yaml - name: Pull an example repo hg: repo: https://www.mercurial-scm.org/repo/hello/ dest: /tmp/hello revision: null # or -1 ``` ##### EXPECTED RESULTS The repo is cloned/pulled, and only contains .hg/ directory with file history, but no files in working directory. At the initial clone the report with -vv should say `{... ""before"": """", ""after"": ""000000000000 default"" ...}`. ##### ACTUAL RESULTS Cloning/pulling works fine, but there are `Makefile` and `hello.c`, updated to the latest revision. Also, with -vv the report says `{... ""before"": """", ""after"": ""82e55d328c8c default tip"" ...}`. 
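A stop-gap that works while the module mishandles a null revision is to let the hg module do the clone/pull and then empty the working copy with the Mercurial CLI itself; the repository path is the reporter's example, the task wording is an illustration:

```yaml
- name: Mirror the repository (history only is wanted)
  hg:
    repo: https://www.mercurial-scm.org/repo/hello/
    dest: /tmp/hello

# 'null' is Mercurial's name for the empty changeset, so this clears the
# working directory while keeping .hg/ intact. It reports 'changed' on
# every run, which is acceptable for a backup/mirror job.
- name: Update the working directory to the null revision
  command: hg update --clean null
  args:
    chdir: /tmp/hello
```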
",1,hg updating to empty changeset a k a revision or null doesn t work issue type bug report component name hg ansible version ansible config file home argh ansible cfg configured module search path default w o overrides configuration os environment running from debian gnu linux testing managing ubuntu lts summary pulling all changes and updating a mercurial repo to empty changeset for backup mirroring purposes for example used to work with revision null now it doesn t it also doesn t work with revision but works with revision null i e with quotes steps to reproduce yaml name pull an example repo hg repo dest tmp hello revision null or expected results the repo is cloned pulled and only contains hg directory with file history but no files in working directory at the initial clone the report with vv should say before after default actual results cloning pulling works fine but there are makefile and hello c updated to the latest revision also with vv the report says before after default tip ,1 1458,6306739670.0,IssuesEvent,2017-07-21 21:55:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Verify that cron_file argument is valid.,affects_2.0 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cron ##### ANSIBLE VERSION ``` ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY Cron will only read cron-files without . (dots) in them, particularly `[The filenames] must consist solely of upper- and lower-case letters, digits, underscores, and hyphens. This means that they cannot contain any dots.` It would be nice if 1) The cron-module gave a warning if the user ran cron_file with an illegal filename, and 2) this were mentioned in the documentation of the cron-module, and 3) the cron-module had a ""sanitise"" option which would replace dots with some optional character. Point 1 is the most important, it will save others from the going through the same debugging as we had to ;-) ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"Verify that cron_file argument is valid. - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cron ##### ANSIBLE VERSION ``` ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY Cron will only read cron-files without . (dots) in them, particularly `[The filenames] must consist solely of upper- and lower-case letters, digits, underscores, and hyphens. This means that they cannot contain any dots.` It would be nice if 1) The cron-module gave a warning if the user ran cron_file with an illegal filename, and 2) this were mentioned in the documentation of the cron-module, and 3) the cron-module had a ""sanitise"" option which would replace dots with some optional character. 
Point 1 is the most important, it will save others from the going through the same debugging as we had to ;-) ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,verify that cron file argument is valid issue type feature idea component name cron ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary cron will only read cron files without dots in them particularly must consist solely of upper and lower case letters digits underscores and hyphens this means that they cannot contain any dots it would be nice if the cron module gave a warning if the user ran cron file with an illegal filename and this were mentioned in the documentation of the cron module and the cron module had a sanitise option which would replace dots with some optional character point is the most important it will save others from the going through the same debugging as we had to steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results ,1 1452,6292573384.0,IssuesEvent,2017-07-20 06:14:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add a YAML file module?,affects_2.3 feature_idea waiting_on_maintainer,"Hi, There is already a module to tweak settings in ini files. But many softwares already use YAML configuration files, and as Ansible uses them for its own backend, I think it could be possible to add a module to handle settings in YAML files as well, at little cost. This could be especially interesting as YAML syntax is prompt to multilines and cannot be parsed easily with lineinfile module. What do you think about it? Thanks ",True,"Add a YAML file module? - Hi, There is already a module to tweak settings in ini files. But many softwares already use YAML configuration files, and as Ansible uses them for its own backend, I think it could be possible to add a module to handle settings in YAML files as well, at little cost. This could be especially interesting as YAML syntax is prompt to multilines and cannot be parsed easily with lineinfile module. What do you think about it? 
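For comparison, the usual stop-gap today is to round-trip the file with existing modules and filters rather than a dedicated yaml module; the file path and key below are invented for illustration:

```yaml
- name: Read the current configuration
  slurp:
    src: /etc/myapp/config.yml
  register: myapp_config_raw

# Decode, merge one override in, and write the result back. Comments and
# key order in the original file are lost, which is exactly the pain point
# a first-class YAML-editing module would address.
- name: Write the configuration back with one key overridden
  copy:
    dest: /etc/myapp/config.yml
    content: >-
      {{ (myapp_config_raw.content | b64decode | from_yaml)
         | combine({'log_level': 'debug'})
         | to_nice_yaml }}
```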
Thanks ",1,add a yaml file module hi there is already a module to tweak settings in ini files but many softwares already use yaml configuration files and as ansible uses them for its own backend i think it could be possible to add a module to handle settings in yaml files as well at little cost this could be especially interesting as yaml syntax is prompt to multilines and cannot be parsed easily with lineinfile module what do you think about it thanks ,1 1803,6575924513.0,IssuesEvent,2017-09-11 17:51:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Nodes Unreachable after Increasing Forks and File Descriptors ansible 2.1.0.0,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/jenkins/workspace/keys-test/stack_info/ansiblehosts/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION added forks=13 in ansible.cfg increased File descriptors to 8192 ##### OS / ENVIRONMENT ubuntu 14.04 (From and managed) ##### SUMMARY We Increased the number of forks from default 5 to 20 and for that we increased the File Descriptors (hard and soft limit to 8192) But we have been seeing intermittent failures. 2 out of three times the job goes through successfully, also by decreasing the number of forks we dont see failures. With forks 11 or higher we see intermittent failures. There might be a file descriptor leak. Please advise- ##### STEPS TO REPRODUCE added forks=13 under [defaults] and ran a sample playbook that runs a simple command on the managed nodes ``` ansible-playbook -vvvv -i /home/jenkins/workspace/keys-test/stack_info/ansiblehosts/hosts/hosts_all.ini /home/jenkins/workspace/keys-test/test.yml - hosts: - cassandra_node_* - analytic_node_* - analytic2_node_1 - solr_node_* user: ubuntu sudo: yes tasks: - name: Test Ansible Connection command: ls /home/ubuntu ~ ``` ##### EXPECTED RESULTS All hosts should be reachable ##### ACTUAL RESULTS 2 hosts unreachable ``` 18:12:02 <10.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: ubuntu 18:12:02 <10.xx.xx.xx> SSH: EXEC ssh -C -vvv -o ControlPersist=15m -F ssh.config -q -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ControlPath=~/.ssh/mux-%r@%h:%p' 10.xx.xx.xx '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.59-107361301033999 `"" && echo ansible-tmp-1474049522.59-107361301033999=""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.59-107361301033999 `"" ) && sleep 0'""'""'' 18:12:02 fatal: [10.26.41.4]: UNREACHABLE! => {""changed"": false, ""msg"": ""Failed to connect to the host via ssh."", ""unreachable"": true} 18:12:02 <10.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: ubuntu 18:12:02 <10.xx.xx.xx> SSH: EXEC ssh -C -vvv -o ControlPersist=15m -F ssh.config -q -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ControlPath=~/.ssh/mux-%r@%h:%p' 10.xx.xx.xx '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.84-214871160776213 `"" && echo ansible-tmp-1474049522.84-214871160776213=""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.84-214871160776213 `"" ) && sleep 0'""'""'' 18:12:03 fatal: [10.xx.xx.xx]: UNREACHABLE! 
=> {""changed"": false, ""msg"": ""Failed to connect to the host via ssh."", ""unreachable"": ``` ",True,"Nodes Unreachable after Increasing Forks and File Descriptors ansible 2.1.0.0 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/jenkins/workspace/keys-test/stack_info/ansiblehosts/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION added forks=13 in ansible.cfg increased File descriptors to 8192 ##### OS / ENVIRONMENT ubuntu 14.04 (From and managed) ##### SUMMARY We Increased the number of forks from default 5 to 20 and for that we increased the File Descriptors (hard and soft limit to 8192) But we have been seeing intermittent failures. 2 out of three times the job goes through successfully, also by decreasing the number of forks we dont see failures. With forks 11 or higher we see intermittent failures. There might be a file descriptor leak. Please advise- ##### STEPS TO REPRODUCE added forks=13 under [defaults] and ran a sample playbook that runs a simple command on the managed nodes ``` ansible-playbook -vvvv -i /home/jenkins/workspace/keys-test/stack_info/ansiblehosts/hosts/hosts_all.ini /home/jenkins/workspace/keys-test/test.yml - hosts: - cassandra_node_* - analytic_node_* - analytic2_node_1 - solr_node_* user: ubuntu sudo: yes tasks: - name: Test Ansible Connection command: ls /home/ubuntu ~ ``` ##### EXPECTED RESULTS All hosts should be reachable ##### ACTUAL RESULTS 2 hosts unreachable ``` 18:12:02 <10.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: ubuntu 18:12:02 <10.xx.xx.xx> SSH: EXEC ssh -C -vvv -o ControlPersist=15m -F ssh.config -q -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ControlPath=~/.ssh/mux-%r@%h:%p' 10.xx.xx.xx '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.59-107361301033999 `"" && echo ansible-tmp-1474049522.59-107361301033999=""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.59-107361301033999 `"" ) && sleep 0'""'""'' 18:12:02 fatal: [10.26.41.4]: UNREACHABLE! => {""changed"": false, ""msg"": ""Failed to connect to the host via ssh."", ""unreachable"": true} 18:12:02 <10.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: ubuntu 18:12:02 <10.xx.xx.xx> SSH: EXEC ssh -C -vvv -o ControlPersist=15m -F ssh.config -q -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ControlPath=~/.ssh/mux-%r@%h:%p' 10.xx.xx.xx '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.84-214871160776213 `"" && echo ansible-tmp-1474049522.84-214871160776213=""` echo $HOME/.ansible/tmp/ansible-tmp-1474049522.84-214871160776213 `"" ) && sleep 0'""'""'' 18:12:03 fatal: [10.xx.xx.xx]: UNREACHABLE! 
=> {""changed"": false, ""msg"": ""Failed to connect to the host via ssh."", ""unreachable"": ``` ",1,nodes unreachable after increasing forks and file descriptors ansible issue type bug report component name ansible version ansible config file home jenkins workspace keys test stack info ansiblehosts ansible cfg configured module search path default w o overrides configuration added forks in ansible cfg increased file descriptors to os environment ubuntu from and managed summary we increased the number of forks from default to and for that we increased the file descriptors hard and soft limit to but we have been seeing intermittent failures out of three times the job goes through successfully also by decreasing the number of forks we dont see failures with forks or higher we see intermittent failures there might be a file descriptor leak please advise steps to reproduce added forks under and ran a sample playbook that runs a simple command on the managed nodes ansible playbook vvvv i home jenkins workspace keys test stack info ansiblehosts hosts hosts all ini home jenkins workspace keys test test yml hosts cassandra node analytic node node solr node user ubuntu sudo yes tasks name test ansible connection command ls home ubuntu expected results all hosts should be reachable actual results hosts unreachable establish ssh connection for user ubuntu ssh exec ssh c vvv o controlpersist f ssh config q o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath ssh mux r h p xx xx xx bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep fatal unreachable changed false msg failed to connect to the host via ssh unreachable true establish ssh connection for user ubuntu ssh exec ssh c vvv o controlpersist f ssh config q o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath ssh mux r h p xx xx xx bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep fatal unreachable changed false msg failed to connect to the host via ssh unreachable ,1 857,4525194418.0,IssuesEvent,2016-09-07 03:17:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Unable to use vmware_datacenter module,bug_report cloud vmware waiting_on_maintainer,"Issue type: Bug report Ansible version: 2.1.0 devel 6bf2f45 Ansible configuration: Default Environment: Kubuntu 15.10 (4.2.0-25-generic) Summary: I'm trying to use vmware_datacenter module to create a datacenter inside of the VMware. In fact, datacenter is created, but task finishes with an error: ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 2308, in main() File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 171, in main datacenter_states[desired_state][current_state](module) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 141, in state_exit_unchanged module.exit_json(changed=False) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 1749, in exit_json kwargs = remove_values(kwargs, self.no_log_values) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 600, in remove_values raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) TypeError: Value of unknown type: , 'vim.Datacenter:datacenter-22' ``` Steps to reproduce: Example of the role: ```yaml --- - name: Create datacenter local_action: vmware_datacenter hostname=""vcenter_hostname"" username=""vcenter_user"" password=""vcenter_password"" datacenter_name=""datacenter_name"" state=present ``` Expected result: Task should finish without errors. Actual results: Datacenter is created, but task finishes with an error. Thanks.",True,"Unable to use vmware_datacenter module - Issue type: Bug report Ansible version: 2.1.0 devel 6bf2f45 Ansible configuration: Default Environment: Kubuntu 15.10 (4.2.0-25-generic) Summary: I'm trying to use vmware_datacenter module to create a datacenter inside of the VMware. In fact, datacenter is created, but task finishes with an error: ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 2308, in main() File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 171, in main datacenter_states[desired_state][current_state](module) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 141, in state_exit_unchanged module.exit_json(changed=False) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 1749, in exit_json kwargs = remove_values(kwargs, self.no_log_values) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 591, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter"", line 600, in remove_values raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) TypeError: Value of unknown type: , 'vim.Datacenter:datacenter-22' ``` Steps to reproduce: Example of the role: ```yaml --- - name: Create datacenter local_action: vmware_datacenter hostname=""vcenter_hostname"" username=""vcenter_user"" password=""vcenter_password"" datacenter_name=""datacenter_name"" state=present ``` Expected result: Task should finish without errors. Actual results: Datacenter is created, but task finishes with an error. 
Thanks.",1,unable to use vmware datacenter module issue type bug report ansible version devel ansible configuration default environment kubuntu generic summary i m trying to use vmware datacenter module to create a datacenter inside of the vmware in fact datacenter is created but task finishes with an error an exception occurred during task execution the full traceback is traceback most recent call last file home kamil ansible tmp ansible tmp vmware datacenter line in main file home kamil ansible tmp ansible tmp vmware datacenter line in main datacenter states module file home kamil ansible tmp ansible tmp vmware datacenter line in state exit unchanged module exit json changed false file home kamil ansible tmp ansible tmp vmware datacenter line in exit json kwargs remove values kwargs self no log values file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values raise typeerror value of unknown type s s type value value typeerror value of unknown type vim datacenter datacenter steps to reproduce example of the role yaml name create datacenter local action vmware datacenter hostname vcenter hostname username vcenter user password vcenter password datacenter name datacenter name state present expected result task should finish without errors actual results datacenter is created but task finishes with an error thanks ,1 1874,6577499684.0,IssuesEvent,2017-09-12 01:20:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker module has wrong/misleading error handling during container creation,affects_2.0 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` and current devel ##### CONFIGURATION standard ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again. But if the image is local, module will fail with pull error instead of actual start error. ##### STEPS TO REPRODUCE 1. Create local image with `docker build` or `docker commit` and tag it as `my/test:latest` 2. Try to start container giving wrong network name: ``` docker: name: 'test' image: 'my/test:latest' state: 'restarted' net: 'bad_network_name' ``` ##### EXPECTED RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: network bad_network_name not found""} ``` ##### ACTUAL RESULTS ``` fatal: [test-host]: FAILED! 
=> {""changed"": false, ""changes"": [""{\""status\"":\""Pulling repository docker.io/my/test\""}\r\n"", ""{\""errorDetail\"":{\""message\"":\""Error: image my/test not found\""},\""error\"":\""Error: image my/test not found\""}\r\n""], ""failed"": true, ""msg"": ""Unrecognized status from pull."", ""status"": """"} ``` [Part of code in question](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1658) – try/except block with do_create. Why do we try to pull the image if we get 404 response code? As per [docker api docs](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22) response codes for `containers/create` endpoint are too general to make such decisions: - 201 – no error - 404 – no such container - 406 – impossible to attach (container not running) - 500 – server error ",True,"docker module has wrong/misleading error handling during container creation - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` and current devel ##### CONFIGURATION standard ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again. But if the image is local, module will fail with pull error instead of actual start error. ##### STEPS TO REPRODUCE 1. Create local image with `docker build` or `docker commit` and tag it as `my/test:latest` 2. Try to start container giving wrong network name: ``` docker: name: 'test' image: 'my/test:latest' state: 'restarted' net: 'bad_network_name' ``` ##### EXPECTED RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: network bad_network_name not found""} ``` ##### ACTUAL RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""changes"": [""{\""status\"":\""Pulling repository docker.io/my/test\""}\r\n"", ""{\""errorDetail\"":{\""message\"":\""Error: image my/test not found\""},\""error\"":\""Error: image my/test not found\""}\r\n""], ""failed"": true, ""msg"": ""Unrecognized status from pull."", ""status"": """"} ``` [Part of code in question](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1658) – try/except block with do_create. Why do we try to pull the image if we get 404 response code? 
As per [docker api docs](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22) response codes for `containers/create` endpoint are too general to make such decisions: - 201 – no error - 404 – no such container - 406 – impossible to attach (container not running) - 500 – server error ",1,docker module has wrong misleading error handling during container creation issue type bug report component name docker module ansible version ansible and current devel configuration standard os environment ubuntu summary when docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again but if the image is local module will fail with pull error instead of actual start error steps to reproduce create local image with docker build or docker commit and tag it as my test latest try to start container giving wrong network name docker name test image my test latest state restarted net bad network name expected results fatal failed changed false failed true msg docker api error network bad network name not found actual results fatal failed changed false changes failed true msg unrecognized status from pull status – try except block with do create why do we try to pull the image if we get response code as per response codes for containers create endpoint are too general to make such decisions – no error – no such container – impossible to attach container not running – server error ,1 1496,6478927151.0,IssuesEvent,2017-08-18 09:15:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_virualmashine issue,affects_2.1 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. 
##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" && echo ansible-tmp-1470326423.51-208881287834045=""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python 
/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"azure_rm_virualmashine issue - ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. 
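The NameError in the output above usually means the module's optional Azure SDK imports failed silently, leaving the VirtualMachineSizeTypes enum unbound. A quick diagnostic on the control machine can confirm whether the installed SDK exposes that enum at all; the import path below is an assumption based on how azure-mgmt-compute lays out its models, not something taken from the module source.

```
# Diagnostic sketch: check the installed Azure SDK before blaming the playbook.
# The import path is an assumed location for the enum, not the module's code.
import pkg_resources

for dist in ("azure", "azure-mgmt-compute"):
    try:
        print(dist, pkg_resources.get_distribution(dist).version)
    except pkg_resources.DistributionNotFound:
        print(dist, "not installed")

try:
    from azure.mgmt.compute.models import VirtualMachineSizeTypes  # assumed path
    print("VirtualMachineSizeTypes importable, first members:",
          [size.value for size in list(VirtualMachineSizeTypes)[:3]])
except ImportError as exc:
    print("enum not importable -- the module would hit the same NameError:", exc)
```

If the import fails, the mismatch is between the installed SDK and what the module expects, and reinstalling the SDK version documented for this Ansible release is the usual next step.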
##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" && echo ansible-tmp-1470326423.51-208881287834045=""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python 
/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,azure rm virualmashine issue issue type bug report feature idea documentation report component name azure rm virtualmachine module ansible version ansible noarch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables python modules azure os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific fedora summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local gather facts false become false vars files environments azure azure credentials encrypted yml inventory environments azure azure credentials encrypted temp passwd yml vars roles create azure vm and roles create azure vm main yml name create vm with defaults azure rm virtualmachine resource group testing name admin username test user admin password test vm image offer centos publisher openlogic sku version latest expected results creatiion of vm actual results playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main 
file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed ,1 1704,6574406212.0,IssuesEvent,2017-09-11 12:46:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt module fail to upgrade installed only packages list if one package isn't available,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION 2.1.1.0 ##### CONFIGURATION standard ##### OS / ENVIRONMENT Debian ##### SUMMARY I use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded: - name: upgrade ""{{packages}}"" become: true become_user: root apt: name={{item}} state=latest update_cache=no only_upgrade=yes with_items: ""{{packages}}"" {{packages}} is a list of packages to upgrade if they are installed but if one of the package is *not available* (and so not installed), the whole upgrade fail with ""No package matching 'xxxxxx' is available"" I think the package_status 
method in apt modules should return one more information named ""available"". Then in ""install"" method, we can skip non available packages in the same way we skip non installed packages for only_upgrade=yes. ##### STEPS TO REPRODUCE Execute a playbook with content - name: upgrade ""{{packages}}"" become: true become_user: root apt: name={{item}} state=latest update_cache=no only_upgrade=yes with_items: ""{{packages}}"" ona debian jessie machine with args: ansible-playbook -l ""online"" playbooks/upgrade-packages.yml -e '{ ""packages"": [ ""upgradable installed package"", ""non upgradable installed package"", ""non installed package"", ""non available package"", ] }' ##### EXPECTED RESULTS ""upgradable installed package"": should be upgraded ""non upgradable installed package"": shouldn't be changed ""non installed package"": should not be installed ""non available package"": should be ignored ##### ACTUAL RESULTS no packages are upgraded with result: ""No package matching 'non available package' is available"" ",True,"apt module fail to upgrade installed only packages list if one package isn't available - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION 2.1.1.0 ##### CONFIGURATION standard ##### OS / ENVIRONMENT Debian ##### SUMMARY I use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded: - name: upgrade ""{{packages}}"" become: true become_user: root apt: name={{item}} state=latest update_cache=no only_upgrade=yes with_items: ""{{packages}}"" {{packages}} is a list of packages to upgrade if they are installed but if one of the package is *not available* (and so not installed), the whole upgrade fail with ""No package matching 'xxxxxx' is available"" I think the package_status method in apt modules should return one more information named ""available"". Then in ""install"" method, we can skip non available packages in the same way we skip non installed packages for only_upgrade=yes. 
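A standalone sketch of the behaviour proposed above, written against python-apt directly rather than the module's package_status/install internals (the function name and the skip policy are illustrative only): filter the requested list down to packages that are both available in the cache and already installed before doing any upgrading.

```
# Illustrative sketch using python-apt; not the apt module's own code.
import apt

def upgradable_subset(names):
    """Keep only packages that are available in the cache AND installed."""
    cache = apt.Cache()
    keep = []
    for name in names:
        if name not in cache:      # not available from any configured repo
            continue               # proposed behaviour: skip instead of failing
        if not cache[name].is_installed:
            continue               # only_upgrade=yes must never install new packages
        keep.append(name)
    return keep

print(upgradable_subset(["bash", "definitely-not-a-real-package"]))
```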
##### STEPS TO REPRODUCE Execute a playbook with content - name: upgrade ""{{packages}}"" become: true become_user: root apt: name={{item}} state=latest update_cache=no only_upgrade=yes with_items: ""{{packages}}"" ona debian jessie machine with args: ansible-playbook -l ""online"" playbooks/upgrade-packages.yml -e '{ ""packages"": [ ""upgradable installed package"", ""non upgradable installed package"", ""non installed package"", ""non available package"", ] }' ##### EXPECTED RESULTS ""upgradable installed package"": should be upgraded ""non upgradable installed package"": shouldn't be changed ""non installed package"": should not be installed ""non available package"": should be ignored ##### ACTUAL RESULTS no packages are upgraded with result: ""No package matching 'non available package' is available"" ",1,apt module fail to upgrade installed only packages list if one package isn t available issue type bug report component name apt ansible version configuration standard os environment debian summary i use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded name upgrade packages become true become user root apt name item state latest update cache no only upgrade yes with items packages packages is a list of packages to upgrade if they are installed but if one of the package is not available and so not installed the whole upgrade fail with no package matching xxxxxx is available i think the package status method in apt modules should return one more information named available then in install method we can skip non available packages in the same way we skip non installed packages for only upgrade yes steps to reproduce execute a playbook with content name upgrade packages become true become user root apt name item state latest update cache no only upgrade yes with items packages ona debian jessie machine with args ansible playbook l online playbooks upgrade packages yml e packages expected results upgradable installed package should be upgraded non upgradable installed package shouldn t be changed non installed package should not be installed non available package should be ignored actual results no packages are upgraded with result no package matching non available package is available ,1 1226,5218843895.0,IssuesEvent,2017-01-26 17:27:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apache2_module fails for PHP 5.6 even though it is already enabled,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module ""[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)"" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. 
##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""php5.6"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""php5.6"", ""msg"": ""Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""rc"": 0, ""stderr"": """", ""stdout"": ""Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""stdout_lines"": [ ""Considering dependency mpm_prefork for php5.6:"", ""Considering conflict mpm_event for mpm_prefork:"", ""Considering conflict mpm_worker for mpm_prefork:"", ""Module mpm_prefork already enabled"", ""Considering conflict php5 for php5.6:"", ""Module php5.6 already enabled"" ] } ``` Running it manually on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running ""a2enmod php5.6"" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.",True,"apache2_module fails for PHP 5.6 even though it is already enabled - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module ""[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)"" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. 
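For what it is worth, the mismatch described here is easy to demonstrate outside Ansible: the identifier apache2ctl -M prints comes from the LoadModule directive inside the .load file, not from the file name a2enmod uses. A small standalone sketch (not the apache2_module source) that maps an a2enmod-style name such as php5.6 to the identifier Apache actually reports:

```
# Standalone illustration of the name mismatch; not apache2_module's code.
import re

def loaded_identifier(enmod_name, mods_dir="/etc/apache2/mods-available"):
    """Return the LoadModule identifier for an a2enmod-style module name,
    e.g. 'php5_module' for 'php5.6' -- the string apache2ctl -M lists."""
    with open("%s/%s.load" % (mods_dir, enmod_name)) as handle:
        for line in handle:
            match = re.match(r"\s*LoadModule\s+(\S+)", line)
            if match:
                return match.group(1)
    return None

print(loaded_identifier("php5.6"))   # expected: php5_module
```

Whether the module should resolve names this way or simply trust a2enmod's exit status, as the reporter suggests, is a maintainer decision; either approach would avoid the false failure shown above.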
##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""php5.6"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""php5.6"", ""msg"": ""Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""rc"": 0, ""stderr"": """", ""stdout"": ""Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""stdout_lines"": [ ""Considering dependency mpm_prefork for php5.6:"", ""Considering conflict mpm_event for mpm_prefork:"", ""Considering conflict mpm_worker for mpm_prefork:"", ""Module mpm_prefork already enabled"", ""Considering conflict php5 for php5.6:"", ""Module php5.6 already enabled"" ] } ``` Running it manually on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running ""a2enmod php5.6"" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? 
It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.",1, module fails for php even though it is already enabled issue type bug report component name module ansible version ansible config file users nick workspace redacted ansible cfg configured module search path default w o overrides configuration hostfile roles path os environment running ansible on macos sierra target server is ubuntu xenial summary enabling the module with module fails even though the module is already enabled this is the same problem as and but with a different package this module is called but identifies itself in m as module steps to reproduce name enable php module state present name actual results failed item failed true invocation module args force false name state present module name module item msg failed to set module to enabled considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n rc stderr stdout considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n stdout lines considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled running it manually on the server gives considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled echo this is load conflicts depends mpm prefork loadmodule module usr lib modules so note that manually running on the server directly gives a exit status to signal success can t module just check that instead of doing parsing with a regex what if i wanted several sets of conf files in mods available for the same module e g php prod load php dev load both loading the same module but with different config wouldn t that make it impossible for ansible to manage those with module it just seems odd that ansible requires that the module s binary name be the same as the name of its load file ,1 1001,4770650999.0,IssuesEvent,2016-10-26 15:47:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,File module calls /usr/bin/python to execute temp_name.ps1 PowerShell script on Windows,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Builtin **File** module ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION pristine ##### OS / ENVIRONMENT Host: Ubuntu 14.04, managed node: Windows Server 2012 R2 ##### SUMMARY I got an error and had turned verbose mode, so I saw that output: ``` TASK [Copy *.msi files from ./MSI to C:\MSI] *********************************** task path: /home/qaexpert/ansible-lab/tcagent.yml:8 ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO agentsmith EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1477410445.62-187863101456896"").FullName | Write-Host -Separator ''; PUT ""/tmp/tmpqOJYen"" TO 
""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1"" EXEC Set-StrictMode -Version Latest Try { /usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1' } Catch { ``` Here is the strange line: `/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'` Obviously `/usr/bin/python` couldn't exist on Windows server. ##### STEPS TO REPRODUCE Here is my playbook: ``` tasks: # Copy MSI files - name: Copy *.msi files from ./MSI to C:\MSI file: path=C:\MSI state=directory ``` ##### EXPECTED RESULTS I expect the taks to create the folder C:\MSI ##### ACTUAL RESULTS It reports the error: ``` TASK [Copy *.msi files from ./MSI to C:\MSI] *********************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: + ~~~~~~~~~~~~~~~ fatal: [agentsmith]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""The term '/usr/bin/python' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.""} ``` ",True,"File module calls /usr/bin/python to execute temp_name.ps1 PowerShell script on Windows - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Builtin **File** module ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION pristine ##### OS / ENVIRONMENT Host: Ubuntu 14.04, managed node: Windows Server 2012 R2 ##### SUMMARY I got an error and had turned verbose mode, so I saw that output: ``` TASK [Copy *.msi files from ./MSI to C:\MSI] *********************************** task path: /home/qaexpert/ansible-lab/tcagent.yml:8 ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO agentsmith EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1477410445.62-187863101456896"").FullName | Write-Host -Separator ''; PUT ""/tmp/tmpqOJYen"" TO ""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1"" EXEC Set-StrictMode -Version Latest Try { /usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1' } Catch { ``` Here is the strange line: `/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'` Obviously `/usr/bin/python` couldn't exist on Windows server. ##### STEPS TO REPRODUCE Here is my playbook: ``` tasks: # Copy MSI files - name: Copy *.msi files from ./MSI to C:\MSI file: path=C:\MSI state=directory ``` ##### EXPECTED RESULTS I expect the taks to create the folder C:\MSI ##### ACTUAL RESULTS It reports the error: ``` TASK [Copy *.msi files from ./MSI to C:\MSI] *********************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: + ~~~~~~~~~~~~~~~ fatal: [agentsmith]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""The term '/usr/bin/python' is not recognized as the name of a cmdlet, function, script file, or operable program. 
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.""} ``` ",1,file module calls usr bin python to execute temp name powershell script on windows issue type bug report component name builtin file module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration pristine os environment host ubuntu managed node windows server summary i got an error and had turned verbose mode so i saw that output task task path home qaexpert ansible lab tcagent yml establish winrm connection for user administrator on port to agentsmith exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put tmp tmpqojyen to c users administrator appdata local temp ansible tmp file exec set strictmode version latest try usr bin python c users administrator appdata local temp ansible tmp file catch here is the strange line usr bin python c users administrator appdata local temp ansible tmp file obviously usr bin python couldn t exist on windows server steps to reproduce here is my playbook tasks copy msi files name copy msi files from msi to c msi file path c msi state directory expected results i expect the taks to create the folder c msi actual results it reports the error task an exception occurred during task execution to see the full traceback use vvv the error was fatal failed changed false failed true msg the term usr bin python is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again ,1 1353,5829570324.0,IssuesEvent,2017-05-08 14:52:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,[Regression] ios_command module failing with paramiko.hostkeys.InvalidHostKey,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `ios_command` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/rob/code/ansible/anchor-ansible/ansible.cfg configured module search path = ['../ntc-ansible/library', '../napalm-ansible/library'] ``` ##### CONFIGURATION [defaults] host_key_checking = False inventory = ./hosts library = ../ntc-ansible/library:../napalm-ansible/library log_path = ./logfile retry_files_save_path = ./retry/ forks = 50 [paramiko_connection] record_host_keys = False [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ##### OS / ENVIRONMENT Ansible Host Machine: Ubuntu 16.04 Connecting to Cisco routers/switches ##### SUMMARY Using the below playbook to pull running configs from a number of cisco devices, I keep getting the below errors for many fo the devices. ``` ``` This is only part of the error output. These are devices I can successfully ssh to manually and with third party modules. Also note that the invalid host key is the same for each different device. 
``` paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) ``` ##### STEPS TO REPRODUCE ``` --- - name: Rented Leasehold Superfast Routers Show Run to Local File gather_facts: no hosts: rl_superfast tasks: - name: Execute Show Run Command ios_command: provider: ""{{ provider }}"" commands: - show run register: output - name: Write Output to File template: src: output.txt.j2 dest: ""./files/show_files/rl_superfast/superfast/{{ ansible_host }}.txt"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.38> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.38> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `"" && echo ansible-tmp-1479917911.16-22520618528361=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `"" ) && sleep 0' <10.250.0.38> PUT /tmp/tmpvgXT4L TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py <10.250.0.38> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py && sleep 0' <10.250.0.38> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_POTd5B/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_POTd5B/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.38]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_POTd5B/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_POTd5B/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', 
Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.26> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.26> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `"" && echo ansible-tmp-1479917911.73-151987012680090=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `"" ) && sleep 0' <10.250.0.26> PUT /tmp/tmpMkESve TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py <10.250.0.26> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py && sleep 0' <10.250.0.26> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Wmwv85/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_Wmwv85/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.26]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Wmwv85/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_Wmwv85/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.27> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.27> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `"" && echo ansible-tmp-1479917912.28-248435414010347=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `"" ) && sleep 0' <10.250.0.27> PUT /tmp/tmpVwd3xv TO /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py <10.250.0.27> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py && sleep 0' <10.250.0.27> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_N6os2U/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_N6os2U/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.27]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_N6os2U/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_N6os2U/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', 
Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",True,"[Regression] ios_command module failing with paramiko.hostkeys.InvalidHostKey - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `ios_command` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/rob/code/ansible/anchor-ansible/ansible.cfg configured module search path = ['../ntc-ansible/library', '../napalm-ansible/library'] ``` ##### CONFIGURATION [defaults] host_key_checking = False inventory = ./hosts library = ../ntc-ansible/library:../napalm-ansible/library log_path = ./logfile retry_files_save_path = ./retry/ forks = 50 [paramiko_connection] record_host_keys = False [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ##### OS / ENVIRONMENT Ansible Host Machine: Ubuntu 16.04 Connecting to Cisco routers/switches ##### SUMMARY Using the below playbook to pull running configs from a number of cisco devices, I keep getting the below errors for many fo the devices. ``` ``` This is only part of the error output. These are devices I can successfully ssh to manually and with third party modules. Also note that the invalid host key is the same for each different device. ``` paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) ``` ##### STEPS TO REPRODUCE ``` --- - name: Rented Leasehold Superfast Routers Show Run to Local File gather_facts: no hosts: rl_superfast tasks: - name: Execute Show Run Command ios_command: provider: ""{{ provider }}"" commands: - show run register: output - name: Write Output to File template: src: output.txt.j2 dest: ""./files/show_files/rl_superfast/superfast/{{ ansible_host }}.txt"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.38> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.38> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `"" && echo ansible-tmp-1479917911.16-22520618528361=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `"" ) && sleep 0' <10.250.0.38> PUT /tmp/tmpvgXT4L TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py <10.250.0.38> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py && sleep 0' <10.250.0.38> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_POTd5B/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_POTd5B/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.38]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_POTd5B/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_POTd5B/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', 
Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.26> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.26> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `"" && echo ansible-tmp-1479917911.73-151987012680090=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `"" ) && sleep 0' <10.250.0.26> PUT /tmp/tmpMkESve TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py <10.250.0.26> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py && sleep 0' <10.250.0.26> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Wmwv85/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_Wmwv85/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.26]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Wmwv85/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_Wmwv85/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py <10.250.0.27> ESTABLISH LOCAL CONNECTION FOR USER: rob <10.250.0.27> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `"" && echo ansible-tmp-1479917912.28-248435414010347=""` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `"" ) && sleep 0' <10.250.0.27> PUT /tmp/tmpVwd3xv TO /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py <10.250.0.27> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py && sleep 0' <10.250.0.27> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py; rm -rf ""/home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_N6os2U/ansible_module_ios_command.py"", line 237, in main() File ""/tmp/ansible_N6os2U/ansible_module_ios_command.py"", line 200, in main runner.add_command(**cmd) File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 147, in add_command File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py"", line 117, in cli File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py"", line 76, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 101, in load_system_host_keys self._system_host_keys.load(filename) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 101, in load e = HostKeyEntry.from_line(line, lineno) File ""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py"", line 341, in from_line raise InvalidHostKey(line, e) paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',)) fatal: [10.250.0.27]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_N6os2U/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_N6os2U/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 76, in open\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/client.py\"", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \""/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\"", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', 
Error('Incorrect padding',))\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",1, ios command module failing with paramiko hostkeys invalidhostkey issue type bug report component name ios command ansible version ansible config file home rob code ansible anchor ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables host key checking false inventory hosts library ntc ansible library napalm ansible library log path logfile retry files save path retry forks record host keys false ssh args o controlmaster auto o controlpersist o userknownhostsfile dev null o stricthostkeychecking no os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible host machine ubuntu connecting to cisco routers switches summary using the below playbook to pull running configs from a number of cisco devices i keep getting the below errors for many fo the devices this is only part of the error output these are devices i can successfully ssh to manually and with third party modules also note that the invalid host key is the same for each different device paramiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name rented leasehold superfast routers show run to local file gather facts no hosts rl superfast tasks name execute show run command ios command provider provider commands show run register output name write output to file template src output txt dest files show files rl superfast superfast ansible host txt expected results actual results using module file usr local lib dist packages ansible modules core network ios ios command py establish local connection for user rob exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home rob ansible tmp ansible tmp ios command py exec bin sh c chmod u x home rob ansible tmp ansible tmp home rob ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python home rob ansible tmp ansible tmp ios command py rm rf home rob ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios command py line in main file tmp ansible ansible module ios command py line in main runner add command cmd file tmp ansible ansible modlib zip ansible module utils netcli py line in add command file tmp ansible ansible modlib zip ansible module utils network py line in cli file tmp ansible ansible modlib zip ansible module utils network py line in connect file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in open file usr local lib dist packages paramiko client py line in load system host keys self system host keys load filename file usr local lib dist packages paramiko hostkeys py line in load e hostkeyentry from line line lineno file usr local lib dist packages paramiko hostkeys py line in from line raise invalidhostkey line e paramiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding fatal failed changed false failed true invocation module name ios 
command module stderr traceback most recent call last n file tmp ansible ansible module ios command py line in n main n file tmp ansible ansible module ios command py line in main n runner add command cmd n file tmp ansible ansible modlib zip ansible module utils netcli py line in add command n file tmp ansible ansible modlib zip ansible module utils network py line in cli n file tmp ansible ansible modlib zip ansible module utils network py line in connect n file tmp ansible ansible modlib zip ansible module utils ios py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in open n file usr local lib dist packages paramiko client py line in load system host keys n self system host keys load filename n file usr local lib dist packages paramiko hostkeys py line in load n e hostkeyentry from line line lineno n file usr local lib dist packages paramiko hostkeys py line in from line n raise invalidhostkey line e nparamiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding n module stdout msg module failure using module file usr local lib dist packages ansible modules core network ios ios command py establish local connection for user rob exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpmkesve to home rob ansible tmp ansible tmp ios command py exec bin sh c chmod u x home rob ansible tmp ansible tmp home rob ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python home rob ansible tmp ansible tmp ios command py rm rf home rob ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios command py line in main file tmp ansible ansible module ios command py line in main runner add command cmd file tmp ansible ansible modlib zip ansible module utils netcli py line in add command file tmp ansible ansible modlib zip ansible module utils network py line in cli file tmp ansible ansible modlib zip ansible module utils network py line in connect file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in open file usr local lib dist packages paramiko client py line in load system host keys self system host keys load filename file usr local lib dist packages paramiko hostkeys py line in load e hostkeyentry from line line lineno file usr local lib dist packages paramiko hostkeys py line in from line raise invalidhostkey line e paramiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding fatal failed changed false failed true invocation module name ios command module stderr traceback most recent call last n file tmp ansible ansible module ios command py line in n main n file tmp ansible ansible module ios command py line in main n runner add command cmd n file tmp ansible ansible modlib zip ansible module utils netcli py line in add command n file tmp ansible ansible modlib zip ansible module utils network py line in cli n file tmp ansible ansible modlib zip ansible module utils network py line in connect n file tmp ansible ansible modlib zip ansible module utils ios py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in connect n file tmp ansible 
ansible modlib zip ansible module utils shell py line in open n file usr local lib dist packages paramiko client py line in load system host keys n self system host keys load filename n file usr local lib dist packages paramiko hostkeys py line in load n e hostkeyentry from line line lineno n file usr local lib dist packages paramiko hostkeys py line in from line n raise invalidhostkey line e nparamiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding n module stdout msg module failure using module file usr local lib dist packages ansible modules core network ios ios command py establish local connection for user rob exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home rob ansible tmp ansible tmp ios command py exec bin sh c chmod u x home rob ansible tmp ansible tmp home rob ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python home rob ansible tmp ansible tmp ios command py rm rf home rob ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios command py line in main file tmp ansible ansible module ios command py line in main runner add command cmd file tmp ansible ansible modlib zip ansible module utils netcli py line in add command file tmp ansible ansible modlib zip ansible module utils network py line in cli file tmp ansible ansible modlib zip ansible module utils network py line in connect file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in open file usr local lib dist packages paramiko client py line in load system host keys self system host keys load filename file usr local lib dist packages paramiko hostkeys py line in load e hostkeyentry from line line lineno file usr local lib dist packages paramiko hostkeys py line in from line raise invalidhostkey line e paramiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding fatal failed changed false failed true invocation module name ios command module stderr traceback most recent call last n file tmp ansible ansible module ios command py line in n main n file tmp ansible ansible module ios command py line in main n runner add command cmd n file tmp ansible ansible modlib zip ansible module utils netcli py line in add command n file tmp ansible ansible modlib zip ansible module utils network py line in cli n file tmp ansible ansible modlib zip ansible module utils network py line in connect n file tmp ansible ansible modlib zip ansible module utils ios py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in open n file usr local lib dist packages paramiko client py line in load system host keys n self system host keys load filename n file usr local lib dist packages paramiko hostkeys py line in load n e hostkeyentry from line line lineno n file usr local lib dist packages paramiko hostkeys py line in from line n raise invalidhostkey line e nparamiko hostkeys invalidhostkey ssh rsa pectes error incorrect padding n module stdout msg module failure ,1 1810,6576169928.0,IssuesEvent,2017-09-11 
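A note on the paramiko traceback in the record above: the failure happens while paramiko loads the control machine's ~/.ssh/known_hosts ("load_system_host_keys"), so the malformed entry lives on the Ansible host, not on the network devices. The following is a hypothetical diagnostic script (not part of Ansible or the ios_command module) that parses each known_hosts entry with the same paramiko call shown in the traceback and reports the lines that cannot be decoded, e.g. a truncated base64 key producing "Incorrect padding":

```python
# Hypothetical diagnostic sketch: locate unparseable known_hosts entries using
# the same HostKeyEntry.from_line() call that appears in the traceback above.
import os
from paramiko.hostkeys import HostKeyEntry, InvalidHostKey

path = os.path.expanduser('~/.ssh/known_hosts')
with open(path) as handle:
    for lineno, line in enumerate(handle, 1):
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        try:
            HostKeyEntry.from_line(line, lineno)
        except InvalidHostKey as exc:
            print('%s:%d cannot be parsed: %r' % (path, lineno, exc))
```

Removing or correcting the reported line(s) should let load_system_host_keys() complete, which is what the module is tripping over before it ever reaches the device.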
18:47:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,add_host hosts doesn't inherit host_vars from a vars plugin,affects_2.1 needs_info waiting_on_maintainer,"##### ISSUE TYPE - Bug Report (apologies for opening this in the wrong repo -- https://github.com/ansible/ansible/issues/17544) ##### COMPONENT NAME add_host ##### ANSIBLE VERSION ansible 2.1.0.0 ##### CONFIGURATION ansible.cfg ``` ini [defaults] vars_plugins = vars_plugins ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When adding hosts with `add_host`, anything set via vars_plugins (for instance through `host.set_variable`) doesn't seem to be added. 
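One thing that might be worth testing here (purely a suggestion, not confirmed behaviour) is whether the values survive for add_host hosts when the plugin returns them from get_host_vars() instead of mutating the host object with set_variable(). A minimal sketch, reusing the same plugin layout shown under STEPS TO REPRODUCE below:

```python
# Hypothetical workaround sketch (untested): return the variables from
# get_host_vars() so they flow through the normal variable-merge path,
# rather than calling host.set_variable().
class VarsModule(object):

    def __init__(self, inventory):
        self.inventory = inventory
        self.inventory_basedir = inventory.basedir()

    def run(self, host, vault_password=None):
        return {}

    def get_host_vars(self, host, vault_password=None):
        # The returned dict is merged into the host's variables.
        return {'foo': 'should be baz'}

    def get_group_vars(self, group, vault_password=None):
        return {}
```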
##### STEPS TO REPRODUCE playbook.yml ``` yams - hosts: localhost gather_facts: false tasks: - add_host: name=""abc"" foo=""bar"" - debug: var=hostvars['abc'] ``` vars_plugins/test.py ``` python class VarsModule(object): def __init__(self, inventory): """""" constructor """""" self.inventory = inventory self.inventory_basedir = inventory.basedir() def run(self, host, vault_password=None): return {} def get_host_vars(self, host, vault_password=None): host.set_variable('foo', 'should be baz') return {} def get_group_vars(self, group, vault_password=None): return {} ``` ##### EXPECTED RESULTS Debug output should display an overridden ""foo"" variable (I tried adding other empty variables too) ##### ACTUAL RESULTS It's unchanged: ``` json { ""hostvars['abc']"": { ""ansible_check_mode"": false, ""ansible_version"": { ""full"": ""2.1.0.0"", ""major"": 2, ""minor"": 1, ""revision"": 0, ""string"": ""2.1.0.0"" }, ""foo"": ""bar"", ""group_names"": [], ""groups"": { ""all"": [ ""localhost"", ""abc"" ], ""ungrouped"": [ ""localhost"" ] }, ""inventory_dir"": null, ""inventory_file"": null, ""inventory_hostname"": ""abc"", ""inventory_hostname_short"": ""abc"", ""omit"": ""__omit_place_holder__62074bbe1679ad30f57196d92a71186467341ed9"", ""playbook_dir"": ""/Users/jbergstroem/Work/ansible-plugin-foo"" } } ``` ",1,add host hosts doesn t inherit host vars from a vars plugin issue type bug report apologies for opening this in the wrong repo component name add host ansible version ansible configuration ansible cfg ini vars plugins vars plugins os environment n a summary when adding hosts with add host anything set via vars plugins through for instance host set variable doesn t seem to be added steps to reproduce playbook yml yams hosts localhost gather facts false tasks add host name abc foo bar debug var hostvars vars plugins test py python class varsmodule object def init self inventory constructor self inventory inventory self inventory basedir inventory basedir def run self host vault password none return def get host vars self host vault password none host set variable foo should be baz return def get group vars self group vault password none return expected results debug output should display an overridden foo variable i tried adding other empty variables too actual results it s unchanged json hostvars ansible check mode false ansible version full major minor revision string foo bar group names groups all localhost abc ungrouped localhost inventory dir null inventory file null inventory hostname abc inventory hostname short abc omit omit place holder playbook dir users jbergstroem work ansible plugin foo ,1 1865,6577487193.0,IssuesEvent,2017-09-12 01:15:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,IAM Role getting back truncated results,affects_1.9 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam plugin (from ansible 2.0) ##### ANSIBLE VERSION ``` ansible 1.9.4 configured module search path = ../../ansible/library ``` ##### OS / ENVIRONMENT ##### SUMMARY The first time I create a role in AWS it succeeds without issue. However, on subsequent runs (meaning the role still exists), the call to the module to set the `state: present` results in a failure with the following error message: ``` failed: [localhost] => {""changed"": true, ""failed"": true} msg: BotoServerError: 409 Conflict Sender EntityAlreadyExists Role with name ROLENAME already exists. 
b0c4d6d6-00d6-11e6-becf-051c60f11c4a FATAL: all hosts have already failed -- aborting ``` After debugging the module, it looks like the call to iam.list_roles() is not returning all of the roles and my ROLENAME happens to be outside the window. This happens when running against accounts with >100 roles. Looking at the boto docs, it doesn't seem to indicate a fixed default. But the module must be hitting one. ##### STEPS TO REPRODUCE 1. Execute against an AWS account with >100 roles. 2. Run a playbook to create a role using the iam module 3. Run the playbook again. 4. See failure. ``` # Create an IAM role with the policy - name: AWS | Create EC2 IAM Role iam: state: present region: ""{{ aws_region }}"" iam_type: role name: ""{{ role_name }}"" ``` ##### EXPECTED RESULTS Role state of ""present"" should be enforced without failing ##### Questions 1. Looking at the code, a list of all of the roles is generated and used to compare to see if the role exists or not. On the surface, this seems inefficient as a call to iam.get_role(_name_) would be direct. 2. Is this done in order to return a list of all the created roles? 3. If so, could this be changed to instead collect the list after the create is made? 4. In either case, it looks like collecting the list when there are >100 will result in an incomplete list anyway. ",True,"IAM Role getting back truncated results - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam plugin (from ansible 2.0) ##### ANSIBLE VERSION ``` ansible 1.9.4 configured module search path = ../../ansible/library ``` ##### OS / ENVIRONMENT ##### SUMMARY The first time I create a role in AWS it succeeds without issue. However, on subsequent runs (meaning the role still exists), the call to the module to set the `state: present` results in a failure with the following error message: ``` failed: [localhost] => {""changed"": true, ""failed"": true} msg: BotoServerError: 409 Conflict Sender EntityAlreadyExists Role with name ROLENAME already exists. b0c4d6d6-00d6-11e6-becf-051c60f11c4a FATAL: all hosts have already failed -- aborting ``` After debugging the module, it looks like the call to iam.list_roles() is not returning all of the roles and my ROLENAME happens to be outside the window. This happens when running against accounts with >100 roles. Looking at the boto docs, it doesn't seem to indicate a fixed default. But the module must be hitting one. ##### STEPS TO REPRODUCE 1. Execute against an AWS account with >100 roles. 2. Run a playbook to create a role using the iam module 3. Run the playbook again. 4. See failure. ``` # Create an IAM role with the policy - name: AWS | Create EC2 IAM Role iam: state: present region: ""{{ aws_region }}"" iam_type: role name: ""{{ role_name }}"" ``` ##### EXPECTED RESULTS Role state of ""present"" should be enforced without failing ##### Questions 1. Looking at the code, a list of all of the roles is generated and used to compare to see if the role exists or not. On the surface, this seems inefficient as a call to iam.get_role(_name_) would be direct. 2. Is this done in order to return a list of all the created roles? 3. If so, could this be changed to instead collect the list after the create is made? 4. In either case, it looks like collecting the list when there are >100 will result in an incomplete list anyway. 
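For reference, a rough sketch of paging through list_roles() rather than relying on a single call. The `.list_roles_result.roles` path mirrors the shape the module already reads; the is_truncated/marker handling is an assumption about boto 2's response layout and has not been verified against a large account:

```python
# Rough sketch (field names noted above are assumptions): page through
# list_roles() with the returned marker so accounts with more than one page
# of roles are fully enumerated before any existence check.
import boto.iam

def all_role_names(iam):
    names, marker = [], None
    while True:
        result = iam.list_roles(marker=marker).list_roles_result
        names.extend(role.role_name for role in result.roles)
        if str(result.is_truncated).lower() != 'true':   # boto 2 reports this as a string
            return names
        marker = result.marker

iam = boto.iam.connect_to_region('us-east-1')   # region is only an example
print('ROLENAME' in all_role_names(iam))        # ROLENAME as in the error above
```

The direct iam.get_role(name) lookup suggested in Question 1 would avoid the pagination issue entirely, at the cost of no longer returning the full role list.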
",1,iam role getting back truncated results issue type bug report component name iam plugin from ansible ansible version ansible configured module search path ansible library os environment os x yosemite summary the first time i create a role in aws it succeeds without issue however on subsequent runs meaning the role still exists the call to the module to set the state present results in a failure with the following error message failed changed true failed true msg botoservererror conflict errorresponse xmlns sender entityalreadyexists role with name rolename already exists becf fatal all hosts have already failed aborting after debugging the module it looks like the call to iam list roles is not returning all of the roles and my rolename happens to be outside the window this happens when running against accounts with roles looking at the boto docs it doesn t seem to indicate a fixed default but the module must be hitting one steps to reproduce execute against an aws account with roles run a playbook to create a role using the iam module run the playbook again see failure create an iam role with the policy name aws create iam role iam state present region aws region iam type role name role name expected results role state of present should be enforced without failing questions looking at the code a list of all of the roles is generated and used to compare to see if the role exists or not on the surface this seems inefficient as a call to iam get role name would be direct is this done in order to return a list of all the created roles if so could this be changed to instead collect the list after the create is made in either case it looks like collecting the list when there are will result in an incomplete list anyway ,1 1447,6287525610.0,IssuesEvent,2017-07-19 15:07:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_server module not updating metadata on a running instance,affects_2.2 bug_report cloud openstack waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_server module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100) config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running ansible on Debian. Targeting OpenStack instances with CentOS 7.2 ##### SUMMARY Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs. 
##### STEPS TO REPRODUCE This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option): ``` --- - hosts: localhost tasks: - name: Create instance os_server: name: instance image: some-image state: present meta: groups: 'some-group' register: instance - debug: var=instance ``` ##### EXPECTED RESULTS The above playbook should return the metadata argument within the debug output (abbreviated here): ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": { ""groups"": ""some-group"" }, }, } } ``` ##### ACTUAL RESULTS In contrast, the following is obtained, where metadata is returned empty: ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": {}, }, } } ``` Note the task notifies it changed, but nothing happens to the metadata, nor to any other result provided by ansible-playbook's output (just did a diff of two consecutive runs). ",True,"os_server module not updating metadata on a running instance - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_server module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100) config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running ansible on Debian. Targeting OpenStack instances with CentOS 7.2 ##### SUMMARY Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs. ##### STEPS TO REPRODUCE This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option): ``` --- - hosts: localhost tasks: - name: Create instance os_server: name: instance image: some-image state: present meta: groups: 'some-group' register: instance - debug: var=instance ``` ##### EXPECTED RESULTS The above playbook should return the metadata argument within the debug output (abbreviated here): ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": { ""groups"": ""some-group"" }, }, } } ``` ##### ACTUAL RESULTS In contrast, the following is obtained, where metadata is returned empty: ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": {}, }, } } ``` Note the task notifies it changed, but nothing happens to the metadata, nor to any other result provided by ansible-playbook's output (just did a diff of two consecutive runs). 
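As a possible stopgap while the module behaviour is sorted out, the metadata can be pushed directly through shade, the library os_server drives under the hood. This is a hypothetical sketch and assumes the installed shade release provides set_server_metadata; the cloud name is only an example clouds.yaml entry:

```python
# Hypothetical workaround: update the running server's metadata via shade.
import shade

cloud = shade.openstack_cloud(cloud='mycloud')   # 'mycloud' = example clouds.yaml entry
cloud.set_server_metadata('instance', {'groups': 'some-group'})
print(cloud.get_server('instance')['metadata'])
```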
",1,os server module not updating metadata on a running instance issue type bug report component name os server module ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home luisg provision boxes test openstack ansible cfg configured module search path default w o overrides configuration os environment running ansible on debian targeting openstack instances with centos summary module os server does not add meta key value pairs using option meta to a running os instance using the same options while creating the os instance in the first place does add the meta key value pairs steps to reproduce this is an example playbook it assumes your openstack env is correctly set and the same playbook has been run before without the meta option hosts localhost tasks name create instance os server name instance image some image state present meta groups some group register instance debug var instance expected results the above playbook should return the metadata argument within the debug output abbreviated here task ok instance changed true openstack metadata groups some group actual results in contrast the following is obtained where metadata is returned empty task ok instance changed true openstack metadata note the task notifies it changed but nothing happens to the metadata nor to any other result provided by ansible playbook s output just did a diff of two consecutive runs ,1 752,4351489927.0,IssuesEvent,2016-07-31 22:00:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive permission issues with become and http src,bug_report waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION N/A ##### SUMMARY The unarchive module does not appear to honor become permissions when downloading a remote file. It looks like the module tries to download the file to ansible tmp directory as the become_user however that user does not have permissions to write there since they are owned by the user ansible is running as. It only appears to work when become_user is root. Here is an example from my playbook that fails. ``` - name: Extract Zip unarchive: src=http://someserver.com/files/zipfile.zip dest=/opt/exploded copy=no become: yes become_user: user2 ``` ",True,"unarchive permission issues with become and http src - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION N/A ##### SUMMARY The unarchive module does not appear to honor become permissions when downloading a remote file. It looks like the module tries to download the file to ansible tmp directory as the become_user however that user does not have permissions to write there since they are owned by the user ansible is running as. It only appears to work when become_user is root. Here is an example from my playbook that fails. 
``` - name: Extract Zip unarchive: src=http://someserver.com/files/zipfile.zip dest=/opt/exploded copy=no become: yes become_user: user2 ``` ",1,unarchive permission issues with become and http src issue type bug report component name unarchive module ansible version n a summary the unarchive module does not appear to honor become permissions when downloading a remote file it looks like the module tries to download the file to ansible tmp directory as the become user however that user does not have permissions to write there since they are owned by the user ansible is running as it only appears to work when become user is root here is an example from my playbook that fails name extract zip unarchive src dest opt exploded copy no become yes become user ,1 966,4707894653.0,IssuesEvent,2016-10-13 21:31:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,No changes happening under vm_extra_config,affects_2.2 bug_report cloud vmware waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest module ##### ANSIBLE VERSION N/A ##### SUMMARY Hi I've configured the below playbook which works fine, except for one thing....nothing is happening under the vm_extra_config section? Is there something that I have done wrong or is this a bug? --- - hosts: 127.0.0.1 connection: local user: root sudo: false gather_facts: false serial: 1 vars: vcenter_hostname: xxx.xxx.xxx esxhost: xxx.xxx.xxx.xxx datastore: UK-xxxx network: Web Servers vmtemplate: WIN2K12R2-TEMPLATE vmcluster: UK-CLUSTER username: xxxxxxxxx password: xxxxxxxx folder: Labs notes: Created by Ansible tasks: - name: Create VM from template vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" guest: ""UK-ANSIBLE-TEST0{{ name }}"" from_template: yes template_src: ""{{ vmtemplate }}"" cluster: ""{{ vmcluster }}"" resource_pool: ""/Resources"" vm_extra_config: notes: ""{{ notes }}"" folder: ""{{ folder }}"" esxi: datacenter: UK hostname: ""{{ esxhost }}"" If I run a playbook to setup a VM from scratch (not using a template), then the vm_extra_config works. It creates the VM in the folder that I specified. What does this mean? The vm_extra_config does not work with templates? I did see this page: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/ansible-project/vm_extra_config/ansible-project/9Ki7tPqW2i0/vz0swUJ6CAAJ Any ideas?",True,"No changes happening under vm_extra_config - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest module ##### ANSIBLE VERSION N/A ##### SUMMARY Hi I've configured the below playbook which works fine, except for one thing....nothing is happening under the vm_extra_config section? Is there something that I have done wrong or is this a bug? 
--- - hosts: 127.0.0.1 connection: local user: root sudo: false gather_facts: false serial: 1 vars: vcenter_hostname: xxx.xxx.xxx esxhost: xxx.xxx.xxx.xxx datastore: UK-xxxx network: Web Servers vmtemplate: WIN2K12R2-TEMPLATE vmcluster: UK-CLUSTER username: xxxxxxxxx password: xxxxxxxx folder: Labs notes: Created by Ansible tasks: - name: Create VM from template vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" guest: ""UK-ANSIBLE-TEST0{{ name }}"" from_template: yes template_src: ""{{ vmtemplate }}"" cluster: ""{{ vmcluster }}"" resource_pool: ""/Resources"" vm_extra_config: notes: ""{{ notes }}"" folder: ""{{ folder }}"" esxi: datacenter: UK hostname: ""{{ esxhost }}"" If I run a playbook to setup a VM from scratch (not using a template), then the vm_extra_config works. It creates the VM in the folder that I specified. What does this mean? The vm_extra_config does not work with templates? I did see this page: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/ansible-project/vm_extra_config/ansible-project/9Ki7tPqW2i0/vz0swUJ6CAAJ Any ideas?",1,no changes happening under vm extra config issue type bug report component name vsphere guest module ansible version n a summary hi i ve configured the below playbook which works fine except for one thing nothing is happening under the vm extra config section is there something that i have done wrong or is this a bug hosts connection local user root sudo false gather facts false serial vars vcenter hostname xxx xxx xxx esxhost xxx xxx xxx xxx datastore uk xxxx network web servers vmtemplate template vmcluster uk cluster username xxxxxxxxx password xxxxxxxx folder labs notes created by ansible tasks name create vm from template vsphere guest vcenter hostname vcenter hostname username username password password guest uk ansible name from template yes template src vmtemplate cluster vmcluster resource pool resources vm extra config notes notes folder folder esxi datacenter uk hostname esxhost if i run a playbook to setup a vm from scratch not using a template then the vm extra config works it creates the vm in the folder that i specified what does this mean the vm extra config does not work with templates i did see this page any ideas ,1 1696,6574217671.0,IssuesEvent,2017-09-11 12:00:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_command fails with CLI Error when using the src option,affects_2.3 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION Using defaults ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### SUMMARY Sending configuration to Nexus 9K fails when using the 'src' option and NXAPI as transport. The same configuration works fine when using CLI as the transport. The command 'feature nxapi' has already been turned on manually on the target device. ##### STEPS TO REPRODUCE ``` [admin@localhost testing]$ cat nxos_config_test.yml --- - hosts: evpn_leaf vars: nxapi: host: ""{{ inventory_hostname }}"" username: admin password: cisco transport: nxapi tasks: - name: Send configuration commands from file to switch nxos_config: provider: ""{{ nxapi }}"" src: config2.txt register: result Contents of config2.txt: hostname EVPN-SPINE1 ! 
feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn ! interface loopback0 ip address 10.100.100.1/32 ip router ospf 1 area 0.0.0.0 ip pim sparse-mode ! router ospf 1 router-id 10.100.100.1 area 0.0.0.0 authentication message-digest log-adjacency-changes auto-cost reference-bandwidth 1000 Gbps ! ip pim rp-address 10.100.100.254 group-list 224.0.0.0/4 ip pim ssm range 232.0.0.0/8 ! router bgp 65000 router-id 10.100.100.1 address-family ipv4 unicast address-family l2vpn evpn retain route-target all template peer vtep-peer remote-as 65000 update-source loopback0 address-family ipv4 unicast send-community both route-reflector-client address-family l2vpn evpn send-community both route-reflector-client ``` ##### EXPECTED RESULTS Configuration should have been applied to the target device. ##### ACTUAL RESULTS No changes are made to the target device. A ""CLI execution error' is reported. ``` [admin@localhost testing]$ ansible-playbook nxos_config_test.yml -vvvv Using /home/admin/Ansible/testing/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: nxos_config_test.yml ************************************************************* 1 plays in nxos_config_test.yml PLAY [evpn_leaf] *************************************************************************** TASK [setup] ******************************************************************************* Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `"" && echo ansible-tmp-1478822435.86-88529598187044=""` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `"" ) && sleep 0' <10.255.138.13> PUT /tmp/tmp3r9fGa TO /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/ /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py; rm -rf ""/home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/"" > /dev/null 2>&1 && sleep 0' ok: [10.255.138.13] TASK [Send configuration commands from file to switch] ************************************* task path: /home/admin/Ansible/testing/nxos_config_test.yml:12 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `"" && echo ansible-tmp-1478822436.45-187297994686173=""` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `"" ) && sleep 0' <10.255.138.13> PUT /tmp/tmpsGsDIk TO /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/ /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py; rm -rf 
""/home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/"" > /dev/null 2>&1 && sleep 0' fatal: [10.255.138.13]: FAILED! => { ""changed"": false, ""clierror"": ""% Invalid command\n"", ""code"": ""400"", ""failed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""10.255.138.13"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""10.255.138.13"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""nxapi"", ""username"": ""admin"" }, ""replace"": ""line"", ""save"": false, ""src"": ""hostname EVPN-SPINE1\n!\nfeature ospf\nfeature pim\nfeature lldp\nfeature bgp\nfeature nv overlay\nnv overlay evpn\n!\ninterface loopback0\n ip address 10.100.100.1/32\n ip router ospf 1 area 0.0.0.0\n ip pim sparse-mode\n!\nrouter ospf 1\n router-id 10.100.100.1\n area 0.0.0.0 authentication message-digest\n log-adjacency-changes\n auto-cost reference-bandwidth 1000 Gbps\n!\nip pim rp-address 10.100.100.254 group-list 224.0.0.0/4\nip pim ssm range 232.0.0.0/8\n!\nrouter bgp 65000\n router-id 10.100.100.1\n address-family ipv4 unicast\n address-family l2vpn evpn\n retain route-target all\n template peer vtep-peer\n remote-as 65000\n update-source loopback0\n address-family ipv4 unicast\n send-community both\n route-reflector-client\n address-family l2vpn evpn\n send-community both\n route-reflector-client\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true } }, ""msg"": ""CLI execution error"", ""output"": { ""clierror"": ""% Invalid command\n"", ""code"": ""400"", ""msg"": ""CLI execution error"" }, ""url"": ""http://10.255.138.13:80/ins"" } to retry, use: --limit @/home/admin/Ansible/testing/nxos_config_test.retry PLAY RECAP ********************************************************************************* 10.255.138.13 : ok=1 changed=0 unreachable=0 failed=1 ``` ",True,"nxos_command fails with CLI Error when using the src option - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION Using defaults ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### SUMMARY Sending configuration to Nexus 9K fails when using the 'src' option and NXAPI as transport. The same configuration works fine when using CLI as the transport. The command 'feature nxapi' has already been turned on manually on the target device. ##### STEPS TO REPRODUCE ``` [admin@localhost testing]$ cat nxos_config_test.yml --- - hosts: evpn_leaf vars: nxapi: host: ""{{ inventory_hostname }}"" username: admin password: cisco transport: nxapi tasks: - name: Send configuration commands from file to switch nxos_config: provider: ""{{ nxapi }}"" src: config2.txt register: result Contents of config2.txt: hostname EVPN-SPINE1 ! feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn ! interface loopback0 ip address 10.100.100.1/32 ip router ospf 1 area 0.0.0.0 ip pim sparse-mode ! router ospf 1 router-id 10.100.100.1 area 0.0.0.0 authentication message-digest log-adjacency-changes auto-cost reference-bandwidth 1000 Gbps ! 
ip pim rp-address 10.100.100.254 group-list 224.0.0.0/4 ip pim ssm range 232.0.0.0/8 ! router bgp 65000 router-id 10.100.100.1 address-family ipv4 unicast address-family l2vpn evpn retain route-target all template peer vtep-peer remote-as 65000 update-source loopback0 address-family ipv4 unicast send-community both route-reflector-client address-family l2vpn evpn send-community both route-reflector-client ``` ##### EXPECTED RESULTS Configuration should have been applied to the target device. ##### ACTUAL RESULTS No changes are made to the target device. A ""CLI execution error' is reported. ``` [admin@localhost testing]$ ansible-playbook nxos_config_test.yml -vvvv Using /home/admin/Ansible/testing/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: nxos_config_test.yml ************************************************************* 1 plays in nxos_config_test.yml PLAY [evpn_leaf] *************************************************************************** TASK [setup] ******************************************************************************* Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `"" && echo ansible-tmp-1478822435.86-88529598187044=""` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `"" ) && sleep 0' <10.255.138.13> PUT /tmp/tmp3r9fGa TO /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/ /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py; rm -rf ""/home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/"" > /dev/null 2>&1 && sleep 0' ok: [10.255.138.13] TASK [Send configuration commands from file to switch] ************************************* task path: /home/admin/Ansible/testing/nxos_config_test.yml:12 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `"" && echo ansible-tmp-1478822436.45-187297994686173=""` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `"" ) && sleep 0' <10.255.138.13> PUT /tmp/tmpsGsDIk TO /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/ /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py; rm -rf ""/home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/"" > /dev/null 2>&1 && sleep 0' fatal: [10.255.138.13]: FAILED! 
=> { ""changed"": false, ""clierror"": ""% Invalid command\n"", ""code"": ""400"", ""failed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""10.255.138.13"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""10.255.138.13"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""nxapi"", ""username"": ""admin"" }, ""replace"": ""line"", ""save"": false, ""src"": ""hostname EVPN-SPINE1\n!\nfeature ospf\nfeature pim\nfeature lldp\nfeature bgp\nfeature nv overlay\nnv overlay evpn\n!\ninterface loopback0\n ip address 10.100.100.1/32\n ip router ospf 1 area 0.0.0.0\n ip pim sparse-mode\n!\nrouter ospf 1\n router-id 10.100.100.1\n area 0.0.0.0 authentication message-digest\n log-adjacency-changes\n auto-cost reference-bandwidth 1000 Gbps\n!\nip pim rp-address 10.100.100.254 group-list 224.0.0.0/4\nip pim ssm range 232.0.0.0/8\n!\nrouter bgp 65000\n router-id 10.100.100.1\n address-family ipv4 unicast\n address-family l2vpn evpn\n retain route-target all\n template peer vtep-peer\n remote-as 65000\n update-source loopback0\n address-family ipv4 unicast\n send-community both\n route-reflector-client\n address-family l2vpn evpn\n send-community both\n route-reflector-client\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true } }, ""msg"": ""CLI execution error"", ""output"": { ""clierror"": ""% Invalid command\n"", ""code"": ""400"", ""msg"": ""CLI execution error"" }, ""url"": ""http://10.255.138.13:80/ins"" } to retry, use: --limit @/home/admin/Ansible/testing/nxos_config_test.retry PLAY RECAP ********************************************************************************* 10.255.138.13 : ok=1 changed=0 unreachable=0 failed=1 ``` ",1,nxos command fails with cli error when using the src option issue type bug report component name nxos config ansible version ansible config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables using defaults os environment red hat enterprise linux server release maipo summary sending configuration to nexus fails when using the src option and nxapi as transport the same configuration works fine when using cli as the transport the command feature nxapi has already been turned on manually on the target device steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used cat nxos config test yml hosts evpn leaf vars nxapi host inventory hostname username admin password cisco transport nxapi tasks name send configuration commands from file to switch nxos config provider nxapi src txt register result contents of txt hostname evpn feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn interface ip address ip router ospf area ip pim sparse mode router ospf router id area authentication message digest log adjacency changes auto cost reference bandwidth gbps ip pim rp address group list ip pim ssm range router bgp router id address family unicast address family evpn retain route target all template peer vtep peer remote as update source address family unicast send community both route 
reflector client address family evpn send community both route reflector client expected results configuration should have been applied to the target device actual results no changes are made to the target device a cli execution error is reported ansible playbook nxos config test yml vvvv using home admin ansible testing ansible cfg as config file loading callback plugin default of type stdout from usr lib site packages ansible plugins callback init pyc playbook nxos config test yml plays in nxos config test yml play task using module file usr lib site packages ansible modules core system setup py establish local connection for user admin exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home admin ansible tmp ansible tmp setup py exec bin sh c chmod u x home admin ansible tmp ansible tmp home admin ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python home admin ansible tmp ansible tmp setup py rm rf home admin ansible tmp ansible tmp dev null sleep ok task task path home admin ansible testing nxos config test yml using module file usr lib site packages ansible modules core network nxos nxos config py establish local connection for user admin exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpsgsdik to home admin ansible tmp ansible tmp nxos config py exec bin sh c chmod u x home admin ansible tmp ansible tmp home admin ansible tmp ansible tmp nxos config py sleep exec bin sh c usr bin python home admin ansible tmp ansible tmp nxos config py rm rf home admin ansible tmp ansible tmp dev null sleep fatal failed changed false clierror invalid command n code failed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match line parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport nxapi username admin replace line save false src hostname evpn n nfeature ospf nfeature pim nfeature lldp nfeature bgp nfeature nv overlay nnv overlay evpn n ninterface n ip address n ip router ospf area n ip pim sparse mode n nrouter ospf n router id n area authentication message digest n log adjacency changes n auto cost reference bandwidth gbps n nip pim rp address group list nip pim ssm range n nrouter bgp n router id n address family unicast n address family evpn n retain route target all n template peer vtep peer n remote as n update source n address family unicast n send community both n route reflector client n address family evpn n send community both n route reflector client n n ssh keyfile null timeout transport nxapi use ssl false username admin validate certs true msg cli execution error output clierror invalid command n code msg cli execution error url to retry use limit home admin ansible testing nxos config test retry play recap ok changed unreachable failed ,1 1189,5103995126.0,IssuesEvent,2017-01-04 23:15:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible docker_container not exposing ports (2.2.0.0),affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/rilindo/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Using the 
following config: ``` stardust:ansible_docker_demo rilindo$ export ANSIBLE_CONFIG=$(pwd)/ansible.cfg stardust:ansible_docker_demo rilindo$ export ANSIBLE_HOSTS=$(pwd)/hosts stardust:ansible_docker_demo rilindo$ cat ansible.cfg [defaults] roles_path=~/src/ansible_docker_demo/roles stardust:ansible_docker_demo rilindo$ cat hosts 192.168.64.8 ``` ##### OS / ENVIRONMENT - Workstation OS X MacSiera - Server: Ubuntu 14.04 with port 2375 open for docker - Python version: 2.7.10 List of python modules: ``` altgraph (0.10.2) ansible (2.2.0.0) argparse (1.2.1) awscli (1.7.36) bdist-mpkg (0.5.0) bonjour-py (0.3) boto (2.10.0) botocore (1.0.1) cached-property (1.3.0) colorama (0.3.3) docker-compose (1.9.0rc3) docker-py (1.10.6) docker-pycreds (0.2.1) dockerpty (0.4.1) docopt (0.6.2) docutils (0.12) ecdsa (0.13) elasticsearch (1.6.0) enum34 (1.1.6) ``` ##### SUMMARY The docker_container does not appear to be configuring and exposing the ports ##### STEPS TO REPRODUCE Here is the code I am using: ``` --- - hosts: all remote_user: ubuntu vars_files: - secret.yml become: yes become_method: sudo tasks: - name: create custom docker container docker_container: name: mycustomcontainer image: rilindo/myapacheweb:v1 state: present network_mode: bridge exposed_ports: ""80"" published_ports: ""80:80"" ``` ##### EXPECTED RESULTS I expect to see this in docker ps -a ``` root@ubuntu:~# docker run -d -p 80:80 rilindo/myapacheweb:v1 --name mycontainer 30d6c5ceb274bf4e0b3767eac0c3df3ed9b58c480f0d25378c1e66bf2b106767 root@ubuntu:~# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 30d6c5ceb274 rilindo/myapacheweb:v1 ""/bin/sh -c 'apachect"" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp angry_stonebraker root@ubuntu:~# ``` ##### ACTUAL RESULTS This is what it looks like from my workstation ``` stardust:ansible_docker_demo rilindo$ ansible-playbook create_custom_docker_container.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [192.168.64.8] TASK [create custom docker container] ****************************************** changed: [192.168.64.8] PLAY RECAP ********************************************************************* 192.168.64.8 : ok=3 changed=1 unreachable=0 failed=0 ``` So far, so good. 
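[Editor's note: in the docker_container module, `state: present` only ensures the container exists; it is created but never started, and Docker reports published port bindings only for running containers. A minimal sketch of the same task with `state: started`, reusing the reporter's image and container names:]
```
# Illustrative sketch, not the reporter's playbook -- 'started' both creates and runs the container.
- name: create and start the custom docker container
  docker_container:
    name: mycustomcontainer
    image: rilindo/myapacheweb:v1
    state: started            # 'present' leaves the container in 'Created' with no port bindings
    network_mode: bridge
    exposed_ports:
      - "80"
    published_ports:
      - "80:80"
```
With the container actually running, `docker ps` should then list `0.0.0.0:80->80/tcp` under PORTS.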
This is what I see on the docker host: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ff35fdf076de rilindo/myapacheweb:v1 ""/bin/sh -c 'apachect"" 57 seconds ago Created mycustomcontainer ``` Note that the ports do not appear to be present under the **PORTS** column",True,"Ansible docker_container not exposing ports (2.2.0.0) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/rilindo/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Using the following config: ``` stardust:ansible_docker_demo rilindo$ export ANSIBLE_CONFIG=$(pwd)/ansible.cfg stardust:ansible_docker_demo rilindo$ export ANSIBLE_HOSTS=$(pwd)/hosts stardust:ansible_docker_demo rilindo$ cat ansible.cfg [defaults] roles_path=~/src/ansible_docker_demo/roles stardust:ansible_docker_demo rilindo$ cat hosts 192.168.64.8 ``` ##### OS / ENVIRONMENT - Workstation OS X MacSiera - Server: Ubuntu 14.04 with port 2375 open for docker - Python version: 2.7.10 List of python modules: ``` altgraph (0.10.2) ansible (2.2.0.0) argparse (1.2.1) awscli (1.7.36) bdist-mpkg (0.5.0) bonjour-py (0.3) boto (2.10.0) botocore (1.0.1) cached-property (1.3.0) colorama (0.3.3) docker-compose (1.9.0rc3) docker-py (1.10.6) docker-pycreds (0.2.1) dockerpty (0.4.1) docopt (0.6.2) docutils (0.12) ecdsa (0.13) elasticsearch (1.6.0) enum34 (1.1.6) ``` ##### SUMMARY The docker_container does not appear to be configuring and exposing the ports ##### STEPS TO REPRODUCE Here is the code I am using: ``` --- - hosts: all remote_user: ubuntu vars_files: - secret.yml become: yes become_method: sudo tasks: - name: create custom docker container docker_container: name: mycustomcontainer image: rilindo/myapacheweb:v1 state: present network_mode: bridge exposed_ports: ""80"" published_ports: ""80:80"" ``` ##### EXPECTED RESULTS I expect to see this in docker ps -a ``` root@ubuntu:~# docker run -d -p 80:80 rilindo/myapacheweb:v1 --name mycontainer 30d6c5ceb274bf4e0b3767eac0c3df3ed9b58c480f0d25378c1e66bf2b106767 root@ubuntu:~# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 30d6c5ceb274 rilindo/myapacheweb:v1 ""/bin/sh -c 'apachect"" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp angry_stonebraker root@ubuntu:~# ``` ##### ACTUAL RESULTS This is what it looks like from my workstation ``` stardust:ansible_docker_demo rilindo$ ansible-playbook create_custom_docker_container.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [192.168.64.8] TASK [create custom docker container] ****************************************** changed: [192.168.64.8] PLAY RECAP ********************************************************************* 192.168.64.8 : ok=3 changed=1 unreachable=0 failed=0 ``` So far, so good. 
This is what I see on the docker host: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ff35fdf076de rilindo/myapacheweb:v1 ""/bin/sh -c 'apachect"" 57 seconds ago Created mycustomcontainer ``` Note that the ports do not appear to be present under the **PORTS** column",1,ansible docker container not exposing ports issue type bug report component name docker container ansible version ansible config file users rilindo ansible cfg configured module search path default w o overrides configuration using the following config stardust ansible docker demo rilindo export ansible config pwd ansible cfg stardust ansible docker demo rilindo export ansible hosts pwd hosts stardust ansible docker demo rilindo cat ansible cfg roles path src ansible docker demo roles stardust ansible docker demo rilindo cat hosts os environment workstation os x macsiera server ubuntu with port open for docker python version list of python modules altgraph ansible argparse awscli bdist mpkg bonjour py boto botocore cached property colorama docker compose docker py docker pycreds dockerpty docopt docutils ecdsa elasticsearch summary the docker container does not appear to be configuring and exposing the ports steps to reproduce here is the code i am using hosts all remote user ubuntu vars files secret yml become yes become method sudo tasks name create custom docker container docker container name mycustomcontainer image rilindo myapacheweb state present network mode bridge exposed ports published ports expected results i expect to see this in docker ps a root ubuntu docker run d p rilindo myapacheweb name mycontainer root ubuntu docker ps a container id image command created status ports names rilindo myapacheweb bin sh c apachect seconds ago up seconds tcp angry stonebraker root ubuntu actual results this is what it looks like from my workstation stardust ansible docker demo rilindo ansible playbook create custom docker container yml play task ok task changed play recap ok changed unreachable failed so far so good this is what i see on the docker host container id image command created status ports names rilindo myapacheweb bin sh c apachect seconds ago created mycustomcontainer note that the ports do not appear to be present under the ports column,1 960,4704674950.0,IssuesEvent,2016-10-13 12:21:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_facts: `dir all-filesystems | include Directory`not supported on all devices,affects_2.2 bug_report in progress networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin"" WS-C3750-24PS-S ##### SUMMARY ogenstad: >`dir all-filesystems | include Directory` this is not a valid command on all ios devices. >If it’s used in the ios_facts module there needs to be some checks to catch those errors. >I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. 
not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" && echo ansible-tmp-1473154099.74-15933157338277=""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" ) && sleep 0' PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 455, in main() File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 437, in main runner.run() File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 163, in run File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 88, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py"", line 66, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py"", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 455, in \n main()\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 437, in main\n runner.run()\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 163, in run\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 88, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 66, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"ios_facts: `dir all-filesystems | include Directory`not supported on all devices - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin"" WS-C3750-24PS-S ##### SUMMARY ogenstad: >`dir all-filesystems | include Directory` this is not a valid command on all ios devices. >If it’s used in the ios_facts module there needs to be some checks to catch those errors. >I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. 
not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" && echo ansible-tmp-1473154099.74-15933157338277=""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" ) && sleep 0' PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 455, in main() File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 437, in main runner.run() File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 163, in run File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 88, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py"", line 66, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py"", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 455, in \n main()\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 437, in main\n runner.run()\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 163, in run\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 88, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 66, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,ios facts dir all filesystems include directory not supported on all devices issue type bug report component name ios facts ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration os environment flash mz bin ws s summary ogenstad dir all filesystems include directory this is not a valid command on all ios devices if it’s used in the ios facts module there needs to be some checks to catch those errors i haven’t tested the ios facts module yet but if you can just disable that check i’m guessing it would work i e not use gather subset all thanks to ben cirrus from networktocode slack for this bug report steps to reproduce hosts hosts any errors fatal true connection local gather facts no vars cli host ip addr username user password password transport cli tasks ios facts provider cli gather subset all labswitch ip addr expected results no backtrace facts returned actual results using etc ansible ansible cfg as config file playbook get ios facts yml plays in get ios facts yml play task task path root napalm testing get ios facts yml using module file root ansible lib ansible modules core network ios ios facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpxzhjfd to root ansible tmp ansible tmp ios facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios facts py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios facts py line in main file tmp ansible ansible module ios facts py line in main runner run file tmp ansible ansible modlib zip ansible module utils netcli py line in run file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands file tmp ansible ansible modlib zip ansible module utils ios py line in run commands file tmp ansible ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror 
matched error in response dir all filesystems include directory invalid input detected at marker nsw chq sw lab fatal failed changed false failed true invocation module name ios facts module stderr traceback most recent call last n file tmp ansible ansible module ios facts py line in n main n file tmp ansible ansible module ios facts py line in main n runner run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands n file tmp ansible ansible modlib zip ansible module utils ios py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response dir all filesystems include directory r n r n invalid input detected at marker r n r nnsw chq sw lab n module stdout msg module failure to retry use limit get ios facts retry play recap labswitch ok changed unreachable failed ,1 1657,6574047497.0,IssuesEvent,2017-09-11 11:14:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt-rpm Failed to get /usr/bin/rpm,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt-rpm ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ``` $ cat ansible.cfg [defaults] transport = ssh remote_user = toto remote_port = 2222 host_key_checking = False remote_tmp = /tmp roles_path = ./roles log_path = ./ansible.log ansible_managed = Ansible managed: Don't modify manually modified on %Y-%m-%d %H:%M:%S ``` ##### OS / ENVIRONMENT `Debian stretch/sid` ##### SUMMARY I can’t install a RPM located in roles/files/ with apt-rpm module ##### STEPS TO REPRODUCE ``` $ ll roles/icinga2/files/ total 16K -rw-r--r-- 1 tr4sk tr4sk 4,3K juil. 12 12:21 icinga-rpm-release-6-1.el6.noarch.rpm -rw-r--r-- 1 tr4sk tr4sk 4,3K juil. 12 12:25 icinga-rpm-release-7-1.el7.centos.noarch.rpm - name: Add Icinga2 repo RPM apt_rpm: pkg: ""icinga-rpm-release-{{ ansible_distribution_major_version }}-1.el{{ ansible_distribution_major_version }}.noarch.rpm"" state: present TASK [icinga2 : Add Icinga2 repo RPM] ****************************************** fatal: [srv-sup.fr]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""cannot find /usr/bin/apt-get and/or /usr/bin/rpm""} ``` On the remote host ``` # cat /etc/centos-release CentOS release 6.8 (Final) # whereis rpm rpm: /bin/rpm /etc/rpm /usr/lib/rpm /usr/share/man/man8/rpm.8.gz ``` ##### EXPECTED RESULTS It shoulld install the pkg that I provided. 
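[Editor's note: the apt_rpm module drives apt-rpm based distributions such as ALT Linux, which is why it insists on /usr/bin/apt-get; on the reporter's CentOS 6 host the yum module can install a local .rpm directly. A hedged sketch -- the copy destination is an assumed example path, not taken from the report:]
```
# Illustrative sketch only; source filename follows the reporter's role layout.
- name: copy the Icinga repo RPM to the target host
  copy:
    src: "icinga-rpm-release-{{ ansible_distribution_major_version }}-1.el{{ ansible_distribution_major_version }}.noarch.rpm"
    dest: /tmp/icinga-rpm-release.noarch.rpm

- name: install the repo RPM with yum instead of apt_rpm
  yum:
    name: /tmp/icinga-rpm-release.noarch.rpm
    state: present
```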
##### ACTUAL RESULTS ``` TASK [icinga2 : Add Icinga2 repo RPM] ****************************************** task path: /home/tr4sk/script/toto/ansible-role-icinga2/tasks/RedHat_repo.yml:2 Using module file /home/tr4sk/.local/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt_rpm.py ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r srv-sup.fr '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo /tmp/ansible-tmp-1480947022.71-23676108448873 `"" && echo ansible-tmp-1480947022.71-23676108448873=""` echo /tmp/ansible-tmp-1480947022.71-23676108448873 `"" ) && sleep 0'""'""'' PUT /tmp/tmpvWStXj TO /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r '[srv-sup.fr]' ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r srv-sup.fr '/bin/sh -c '""'""'chmod u+x /tmp/ansible-tmp-1480947022.71-23676108448873/ /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r -tt srv-sup '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-iqfhuwvxfijpfxtptyamisppsnsfpmcl; /usr/bin/python /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py; rm -rf ""/tmp/ansible-tmp-1480947022.71-23676108448873/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [srv-sup.fr]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""package"": ""icinga-rpm-release-6-1.el6.noarch.rpm"", ""pkg"": ""icinga-rpm-release-6-1.el6.noarch.rpm"", ""state"": ""present"", ""update_cache"": false }, ""module_name"": ""apt_rpm"" }, ""msg"": ""cannot find /usr/bin/apt-get and/or /usr/bin/rpm"" } ``` ",True,"apt-rpm Failed to get /usr/bin/rpm - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt-rpm ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ``` $ cat ansible.cfg [defaults] transport = ssh remote_user = toto remote_port = 2222 host_key_checking = False remote_tmp = /tmp roles_path = ./roles log_path = ./ansible.log ansible_managed = Ansible managed: Don't modify manually modified on %Y-%m-%d %H:%M:%S ``` ##### OS / ENVIRONMENT `Debian stretch/sid` ##### SUMMARY I can’t install a RPM located in roles/files/ with apt-rpm module ##### STEPS TO REPRODUCE ``` $ ll roles/icinga2/files/ total 16K -rw-r--r-- 1 tr4sk tr4sk 4,3K juil. 12 12:21 icinga-rpm-release-6-1.el6.noarch.rpm -rw-r--r-- 1 tr4sk tr4sk 4,3K juil. 12 12:25 icinga-rpm-release-7-1.el7.centos.noarch.rpm - name: Add Icinga2 repo RPM apt_rpm: pkg: ""icinga-rpm-release-{{ ansible_distribution_major_version }}-1.el{{ ansible_distribution_major_version }}.noarch.rpm"" state: present TASK [icinga2 : Add Icinga2 repo RPM] ****************************************** fatal: [srv-sup.fr]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""cannot find /usr/bin/apt-get and/or /usr/bin/rpm""} ``` On the remote host ``` # cat /etc/centos-release CentOS release 6.8 (Final) # whereis rpm rpm: /bin/rpm /etc/rpm /usr/lib/rpm /usr/share/man/man8/rpm.8.gz ``` ##### EXPECTED RESULTS It shoulld install the pkg that I provided. ##### ACTUAL RESULTS ``` TASK [icinga2 : Add Icinga2 repo RPM] ****************************************** task path: /home/tr4sk/script/toto/ansible-role-icinga2/tasks/RedHat_repo.yml:2 Using module file /home/tr4sk/.local/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt_rpm.py ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r srv-sup.fr '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo /tmp/ansible-tmp-1480947022.71-23676108448873 `"" && echo ansible-tmp-1480947022.71-23676108448873=""` echo /tmp/ansible-tmp-1480947022.71-23676108448873 `"" ) && sleep 0'""'""'' PUT /tmp/tmpvWStXj TO /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r '[srv-sup.fr]' ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r srv-sup.fr '/bin/sh -c '""'""'chmod u+x 
/tmp/ansible-tmp-1480947022.71-23676108448873/ /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: toto SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=toto -o ConnectTimeout=10 -o ControlPath=/home/tr4sk/.ansible/cp/ansible-ssh-%h-%p-%r -tt srv-sup '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-iqfhuwvxfijpfxtptyamisppsnsfpmcl; /usr/bin/python /tmp/ansible-tmp-1480947022.71-23676108448873/apt_rpm.py; rm -rf ""/tmp/ansible-tmp-1480947022.71-23676108448873/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [srv-sup.fr]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""package"": ""icinga-rpm-release-6-1.el6.noarch.rpm"", ""pkg"": ""icinga-rpm-release-6-1.el6.noarch.rpm"", ""state"": ""present"", ""update_cache"": false }, ""module_name"": ""apt_rpm"" }, ""msg"": ""cannot find /usr/bin/apt-get and/or /usr/bin/rpm"" } ``` ",1,apt rpm failed to get usr bin rpm issue type bug report component name apt rpm ansible version ansible configuration cat ansible cfg transport ssh remote user toto remote port host key checking false remote tmp tmp roles path roles log path ansible log ansible managed ansible managed don t modify manually modified on y m d h m s os environment debian stretch sid summary i can’t install a rpm located in roles files with apt rpm module steps to reproduce ll roles files total rw r r juil icinga rpm release noarch rpm rw r r juil icinga rpm release centos noarch rpm name add repo rpm apt rpm pkg icinga rpm release ansible distribution major version el ansible distribution major version noarch rpm state present task fatal failed changed false failed true msg cannot find usr bin apt get and or usr bin rpm on the remote host cat etc centos release centos release final whereis rpm rpm bin rpm etc rpm usr lib rpm usr share man rpm gz expected results it shoulld install the pkg that i provided actual results task task path home script toto ansible role tasks redhat repo yml using module file home local lib site packages ansible modules core packaging os apt rpm py establish ssh connection for user toto ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user toto o connecttimeout o controlpath home ansible cp ansible ssh h p r srv sup fr bin sh c umask mkdir p echo tmp ansible tmp echo ansible tmp echo tmp ansible tmp sleep put tmp tmpvwstxj to tmp ansible tmp apt rpm py ssh exec sftp b vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user toto o connecttimeout o controlpath home ansible cp ansible ssh h p r establish ssh connection for user toto ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user toto o connecttimeout o controlpath home ansible cp ansible ssh h p r srv sup fr bin sh c 
chmod u x tmp ansible tmp tmp ansible tmp apt rpm py sleep establish ssh connection for user toto ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user toto o connecttimeout o controlpath home ansible cp ansible ssh h p r tt srv sup bin sh c sudo h s n u root bin sh c echo become success iqfhuwvxfijpfxtptyamisppsnsfpmcl usr bin python tmp ansible tmp apt rpm py rm rf tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args package icinga rpm release noarch rpm pkg icinga rpm release noarch rpm state present update cache false module name apt rpm msg cannot find usr bin apt get and or usr bin rpm ,1 1758,6574984772.0,IssuesEvent,2017-09-11 14:41:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Azure Storage Document incorrect on Account Kind,affects_2.3 azure cloud docs_report waiting_on_maintainer,"ISSUE TYPE ``` Documentation Report ``` COMPONENT NAME http://docs.ansible.com/ansible/azure_rm_storageaccount_module.html https://github.com/ansible/ansible-modules-core/blob/stable-2.2/cloud/azure/azure_rm_storageaccount.py SUMMARY StorageBlob kind is inconsistent. Code indicates it as BlobStorage ",True,"Azure Storage Document incorrect on Account Kind - ISSUE TYPE ``` Documentation Report ``` COMPONENT NAME http://docs.ansible.com/ansible/azure_rm_storageaccount_module.html https://github.com/ansible/ansible-modules-core/blob/stable-2.2/cloud/azure/azure_rm_storageaccount.py SUMMARY StorageBlob kind is inconsistent. Code indicates it as BlobStorage ",1,azure storage document incorrect on account kind issue type documentation report component name summary storageblob kind is inconsistent code indicates it as blobstorage ,1 1709,6574437930.0,IssuesEvent,2017-09-11 12:54:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add support for upgrading packages to package module,affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME package ##### ANSIBLE VERSION 2.1.2.0 ##### OS / ENVIRONMENT N/A ##### SUMMARY As you know, yum can be used as follows to upgrade all installed packages: > yum: name=* state=latest That doesn't work for apt for example. That module requires using an upgrade parameter. If we try to apply the same logic to package module as in yum, we would write something like: > package: name=* state=latest However this will try to install all packages available in the repository. What I suggest is to add a parameter to package module that translates into an 'upgrade' for any of the packaging systems it supports.",True,"Add support for upgrading packages to package module - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME package ##### ANSIBLE VERSION 2.1.2.0 ##### OS / ENVIRONMENT N/A ##### SUMMARY As you know, yum can be used as follows to upgrade all installed packages: > yum: name=* state=latest That doesn't work for apt for example. That module requires using an upgrade parameter. If we try to apply the same logic to package module as in yum, we would write something like: > package: name=* state=latest However this will try to install all packages available in the repository. 
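[Editor's note: until the package module grows such an upgrade parameter, the usual workaround is one task per package manager, gated on the ansible_pkg_mgr fact. A minimal sketch, not part of the original proposal:]
```
# Illustrative sketch only -- upgrade everything with whichever manager the host uses.
- name: upgrade all packages on yum-based hosts
  yum:
    name: '*'
    state: latest
  when: ansible_pkg_mgr == 'yum'

- name: upgrade all packages on apt-based hosts
  apt:
    upgrade: dist
    update_cache: yes
  when: ansible_pkg_mgr == 'apt'
```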
What I suggest is to add a parameter to package module that translates into an 'upgrade' for any of the packaging systems it supports.",1,add support for upgrading packages to package module issue type feature idea component name package ansible version os environment n a summary as you know yum can be used as follows to upgrade all installed packages yum name state latest that doesn t work for apt for example that module requires using an upgrade parameter if we try to apply the same logic to package module as in yum we would write something like package name state latest however this will try to install all packages available in the repository what i suggest is to add a parameter to package module that translates into an upgrade for any of the packaging systems it supports ,1 974,4716938747.0,IssuesEvent,2016-10-16 10:27:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,reopened,Validation with visudo does not work for lineinfile if ,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` CentOS Linux release 7.2.1511 (Core) Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When trying to create or modify a file in _/etc/sudoers.d_ by using the ```lineinfile``` module the validation with visudo fails because a temporary file is not found. http://docs.ansible.com/ansible/lineinfile_module.html ##### STEPS TO REPRODUCE ``` - name: Setup sudoers permissions lineinfile: dest=/etc/sudoers.d/icinga2 create=yes state=present line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find' validate='visudo -cf %s' ``` ##### EXPECTED RESULTS A file created under _/etc/sudoers.d/icinga2_ with the content ```icinga ALL=(ALL) NOPASSWD:/usr/bin/find```which passed validation. ##### ACTUAL RESULTS ``` FAILED! => {""changed"": false, ""cmd"": ""visudo -cf /tmp/tmpSBsM5A"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2 ``` ",True,"Validation with visudo does not work for lineinfile if - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` CentOS Linux release 7.2.1511 (Core) Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When trying to create or modify a file in _/etc/sudoers.d_ by using the ```lineinfile``` module the validation with visudo fails because a temporary file is not found. http://docs.ansible.com/ansible/lineinfile_module.html ##### STEPS TO REPRODUCE ``` - name: Setup sudoers permissions lineinfile: dest=/etc/sudoers.d/icinga2 create=yes state=present line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find' validate='visudo -cf %s' ``` ##### EXPECTED RESULTS A file created under _/etc/sudoers.d/icinga2_ with the content ```icinga ALL=(ALL) NOPASSWD:/usr/bin/find```which passed validation. ##### ACTUAL RESULTS ``` FAILED! 
=> {""changed"": false, ""cmd"": ""visudo -cf /tmp/tmpSBsM5A"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2 ``` ",1,validation with visudo does not work for lineinfile if issue type bug report component name lineinfile ansible version ansible configuration n a os environment centos linux release core linux smp thu may utc gnu linux summary when trying to create or modify a file in etc sudoers d by using the lineinfile module the validation with visudo fails because a temporary file is not found steps to reproduce name setup sudoers permissions lineinfile dest etc sudoers d create yes state present line icinga all all nopasswd usr bin find validate visudo cf s expected results a file created under etc sudoers d with the content icinga all all nopasswd usr bin find which passed validation actual results failed changed false cmd visudo cf tmp failed true msg no such file or directory rc ,1 1783,6575840517.0,IssuesEvent,2017-09-11 17:31:56,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Error with ec2_group module - Invalid rule parameter '-',affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/user/git/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] retry_files_enabled = False gathering = smart nocows = 1 roles_path = /etc/ansible/roles [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT `macOS Sierra‎ version 10.12` ##### SUMMARY Trying to generate rules using Jinja2 throws an error about invalid parameter. ##### STEPS TO REPRODUCE createSG.yml ``` --- - hosts: localhost connection: local gather_facts: no vars: aws_profile_name: local_aws_profile aws_vpc_id: vpc-abcd1234 ip_whitelist: - 8.8.8.8/32 - 8.8.4.4/32 tasks: - name: Create security group with IP blocks ec2_group: profile: ""{{ aws_profile_name }}"" region: us-east-1 description: ""Whitelist"" name: sg-whitelist purge_rules: true rules: | {% for host in ip_whitelist %} - proto: tcp from_port: 443 to_port: 443 cidr_ip: {{ host }} {% endfor %} vpc_id: ""{{ aws_vpc_id }}"" state: present ``` ``` ansible-playbook playbooks/createSG.yml ``` ##### EXPECTED RESULTS A security group created with two ingress rules. Any rules not part of the play to be purged. ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [Create security group with IP blocks] ******************* fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Invalid rule parameter '-'""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"Error with ec2_group module - Invalid rule parameter '-' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/user/git/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] retry_files_enabled = False gathering = smart nocows = 1 roles_path = /etc/ansible/roles [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT `macOS Sierra‎ version 10.12` ##### SUMMARY Trying to generate rules using Jinja2 throws an error about invalid parameter. ##### STEPS TO REPRODUCE createSG.yml ``` --- - hosts: localhost connection: local gather_facts: no vars: aws_profile_name: local_aws_profile aws_vpc_id: vpc-abcd1234 ip_whitelist: - 8.8.8.8/32 - 8.8.4.4/32 tasks: - name: Create security group with IP blocks ec2_group: profile: ""{{ aws_profile_name }}"" region: us-east-1 description: ""Whitelist"" name: sg-whitelist purge_rules: true rules: | {% for host in ip_whitelist %} - proto: tcp from_port: 443 to_port: 443 cidr_ip: {{ host }} {% endfor %} vpc_id: ""{{ aws_vpc_id }}"" state: present ``` ``` ansible-playbook playbooks/createSG.yml ``` ##### EXPECTED RESULTS A security group created with two ingress rules. Any rules not part of the play to be purged. ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [Create security group with IP blocks] ******************* fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Invalid rule parameter '-'""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,error with group module invalid rule parameter issue type bug report component name group ansible version ansible config file users user git ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables retry files enabled false gathering smart nocows roles path etc ansible roles os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos sierra‎ version summary trying to generate rules using throws an error about invalid parameter steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used createsg yml hosts localhost connection local gather facts no vars aws profile name local aws profile aws vpc id vpc ip whitelist tasks name create security group with ip blocks group profile aws profile name region us east description whitelist name sg whitelist purge rules true rules for host in ip whitelist proto tcp from port to port cidr ip host endfor vpc id aws vpc id state present ansible playbook playbooks createsg yml expected results a security group created with two ingress rules any rules not part of the play to be purged actual results play task fatal failed changed false failed true msg invalid rule parameter no more hosts left play recap localhost ok changed unreachable failed ,1 1817,6577318442.0,IssuesEvent,2017-09-12 00:04:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /users/pikachuexe/projects/spacious/spacious-rails/ansible.cfg configured module search path = ['./ansible/library'] ``` ##### CONFIGURATION ``` [defaults] roles_path = ./ansible/roles hostfile = ./ansible/inventories/localhost filter_plugins = ./ansible/filter_plugins library = ./ansible/library error_on_undefined_vars = True display_skipped_hosts = False ``` ##### OS / ENVIRONMENT ""N/A"" ##### SUMMARY Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0 ##### STEPS TO REPRODUCE I try to update from Ansible 1.8.x And worked around some issues with `2.0.x` (at least still works with some issue unfixed) https://docs.ansible.com/ansible/docker_module.html Declares it works with `docker-py` >= `0.3.0` So I run my usual deploy playbook which fails at the task below ``` --- - name: pull docker image by creating a tmp container sudo: true # This is step that fails # No label specified action: module: docker image: ""{{ docker_image_name }}:{{ docker_image_tag }}"" state: ""present"" pull: ""missing"" # By using name, the existing container will be stopped automatically name: ""{{ docker_container_prefix }}.tmp.pull"" command: ""bash"" detach: yes username: ""{{ docker_api_username | mandatory }}"" password: ""{{ docker_api_password | mandatory }}"" email: ""{{ docker_api_email | mandatory }}"" register: pull_docker_image_result until: 
pull_docker_image_result|success retries: 5 delay: 3 ``` ##### EXPECTED RESULTS Starts container without issue ##### ACTUAL RESULTS Failed due to docker-py version too old (locked at `1.1.0` for ansible `1.8.x`) And `labels` keyword is passed without checking the version ``` fatal: [app_server_02]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker""}, ""module_stderr"": ""OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /users/pikachuexe/.ssh/config\r\ndebug1: /users/pikachuexe/.ssh/config line 1: Applying options for *\r\ndebug1: /users/pikachuexe/.ssh/config line 17: Applying options for 119.81.*.*\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: /etc/ssh_config line 102: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 87565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 119.81.xxx.xxx closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1972, in \r\n main()\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1938, in main\r\n present(manager, containers, count, name)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1742, in present\r\n created = manager.create_containers(delta)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1660, in create_containers\r\n containers = do_create(count, params)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1653, in do_create\r\n result = self.client.create_container(**params)\r\nTypeError: create_container() got an unexpected keyword argument 'labels'\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /users/pikachuexe/projects/spacious/spacious-rails/ansible.cfg configured module search path = ['./ansible/library'] ``` ##### CONFIGURATION ``` [defaults] roles_path = ./ansible/roles hostfile = ./ansible/inventories/localhost filter_plugins = ./ansible/filter_plugins library = ./ansible/library error_on_undefined_vars = True display_skipped_hosts = False ``` ##### OS / ENVIRONMENT ""N/A"" ##### SUMMARY Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0 ##### STEPS TO REPRODUCE I try to update from Ansible 1.8.x And worked around some issues with `2.0.x` (at least still works with some issue unfixed) https://docs.ansible.com/ansible/docker_module.html Declares it works with `docker-py` >= `0.3.0` So I run my usual deploy playbook which fails at the task below ``` --- - name: pull docker image by creating a tmp container sudo: true # This is step that fails # No label specified action: module: docker image: ""{{ docker_image_name }}:{{ docker_image_tag }}"" state: ""present"" pull: ""missing"" # By using name, the existing container 
will be stopped automatically name: ""{{ docker_container_prefix }}.tmp.pull"" command: ""bash"" detach: yes username: ""{{ docker_api_username | mandatory }}"" password: ""{{ docker_api_password | mandatory }}"" email: ""{{ docker_api_email | mandatory }}"" register: pull_docker_image_result until: pull_docker_image_result|success retries: 5 delay: 3 ``` ##### EXPECTED RESULTS Starts container without issue ##### ACTUAL RESULTS Failed due to docker-py version too old (locked at `1.1.0` for ansible `1.8.x`) And `labels` keyword is passed without checking the version ``` fatal: [app_server_02]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker""}, ""module_stderr"": ""OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /users/pikachuexe/.ssh/config\r\ndebug1: /users/pikachuexe/.ssh/config line 1: Applying options for *\r\ndebug1: /users/pikachuexe/.ssh/config line 17: Applying options for 119.81.*.*\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: /etc/ssh_config line 102: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 87565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 119.81.xxx.xxx closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1972, in \r\n main()\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1938, in main\r\n present(manager, containers, count, name)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1742, in present\r\n created = manager.create_containers(delta)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1660, in create_containers\r\n containers = do_create(count, params)\r\n File \""/tmp/ansible_q5yEWt/ansible_module_docker.py\"", line 1653, in do_create\r\n result = self.client.create_container(**params)\r\nTypeError: create_container() got an unexpected keyword argument 'labels'\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,ansible x cloud docker incompatible with docker py issue type bug report component name docker module ansible version ansible config file users pikachuexe projects spacious spacious rails ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables roles path ansible roles hostfile ansible inventories localhost filter plugins ansible filter plugins library ansible library error on undefined vars true display skipped hosts false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary ansible x cloud docker incompatible with docker py steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i try to update from ansible x and worked around some issues with x at least still 
works with some issue unfixed declares it works with docker py so i run my usual deploy playbook which fails at the task below name pull docker image by creating a tmp container sudo true this is step that fails no label specified action module docker image docker image name docker image tag state present pull missing by using name the existing container will be stopped automatically name docker container prefix tmp pull command bash detach yes username docker api username mandatory password docker api password mandatory email docker api email mandatory register pull docker image result until pull docker image result success retries delay expected results starts container without issue actual results failed due to docker py version too old locked at for ansible x and labels keyword is passed without checking the version fatal failed changed false failed true invocation module name docker module stderr openssh osslshim dec reading configuration data users pikachuexe ssh config r users pikachuexe ssh config line applying options for r users pikachuexe ssh config line applying options for r reading configuration data etc ssh config r etc ssh config line applying options for r etc ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to xxx xxx closed r n module stdout traceback most recent call last r n file tmp ansible ansible module docker py line in r n main r n file tmp ansible ansible module docker py line in main r n present manager containers count name r n file tmp ansible ansible module docker py line in present r n created manager create containers delta r n file tmp ansible ansible module docker py line in create containers r n containers do create count params r n file tmp ansible ansible module docker py line in do create r n result self client create container params r ntypeerror create container got an unexpected keyword argument labels r n msg module failure parsed false ,1 805,4425294306.0,IssuesEvent,2016-08-16 15:04:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Cannot attach pool without unregistering with redhat_subscribe,bug_report P3 waiting_on_maintainer,"##### Issue Type: Bug Report ##### Component Name: redhat_subscription module ##### Ansible Version: 1.7 ##### Environment: RHEL 6 ##### Summary: Cannot attach a subscription pool if a system is already registered ##### Steps To Reproduce: Register a RHEL system. Run `ansible rh-host -m redhat_subscription -a 'username=user password=password pool=""Red Hat Enterprise Linux Server""'` to attach a pool ##### Expected Results: I expected a pool to be attached to the system similar to if I were to run `subscription-manager attach --pool=XXXXXXXXX` from the command line ##### Actual Results: When trying to attach a pool to a system that is already registered I get the following output ``` rh-host | success >> { ""changed"": false, ""msg"": ""System already registered."" } ``` To make this work I have to unsubscribe the system first before re-subscribing and attaching a subscription in the same step. 
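[Editor's note: one interim workaround for attaching a pool to an already-registered host is to call subscription-manager directly from a task; the pool ID below is the reporter's own placeholder, not a real value. A minimal sketch:]
```
# Illustrative sketch only -- attach a pool without unregistering first.
# Find candidate pool IDs with 'subscription-manager list --available'.
- name: attach the subscription pool directly
  command: subscription-manager attach --pool=XXXXXXXXX
```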
I should be able to attach a subscription at any time if the system is subscribed or being registered.",True,"Cannot attach pool without unregistering with redhat_subscribe - ##### Issue Type: Bug Report ##### Component Name: redhat_subscription module ##### Ansible Version: 1.7 ##### Environment: RHEL 6 ##### Summary: Cannot attach a subscription pool if a system is already registered ##### Steps To Reproduce: Register a RHEL system. Run `ansible rh-host -m redhat_subscription -a 'username=user password=password pool=""Red Hat Enterprise Linux Server""'` to attach a pool ##### Expected Results: I expected a pool to be attached to the system similar to if I were to run `subscription-manager attach --pool=XXXXXXXXX` from the command line ##### Actual Results: When trying to attach a pool to a system that is already registered I get the following output ``` rh-host | success >> { ""changed"": false, ""msg"": ""System already registered."" } ``` To make this work I have to unsubscribe the system first before re-subscribing and attaching a subscription in the same step. I should be able to attach a subscription at any time if the system is subscribed or being registered.",1,cannot attach pool without unregistering with redhat subscribe issue type bug report component name redhat subscription module ansible version environment rhel summary cannot attach a subscription pool if a system is already registered steps to reproduce register a rhel system run ansible rh host m redhat subscription a username user password password pool red hat enterprise linux server to attach a pool expected results i expected a pool to be attached to the system similar to if i were to run subscription manager attach pool xxxxxxxxx from the command line actual results when trying to attach a pool to a system that is already registered i get the following output rh host success changed false msg system already registered to make this work i have to unsubscribe the system first before re subscribing and attaching a subscription in the same step i should be able to attach a subscription at any time if the system is subscribed or being registered ,1 1660,6574048051.0,IssuesEvent,2017-09-11 11:14:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_virtualmachine fails to find resource group upon vm creation,affects_2.2 azure bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ansible 2.2.0.0 also tried with github pip install at this time ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ``` pip freeze | grep azure azure==2.0.0rc5 azure-batch==0.30.0rc5 azure-cli==0.1.0b10 azure-cli-acr==0.1.0b10 azure-cli-acs==0.1.0b10 azure-cli-appservice==0.1.0b10 azure-cli-cloud==0.1.0b10 azure-cli-component==0.1.0b10 azure-cli-configure==0.1.0b10 azure-cli-container==0.1.0b10 azure-cli-context==0.1.0b10 azure-cli-core==0.1.0b10 azure-cli-feedback==0.1.0b10 azure-cli-network==0.1.0b10 azure-cli-profile==0.1.0b10 azure-cli-resource==0.1.0b10 azure-cli-role==0.1.0b10 azure-cli-storage==0.1.0b10 azure-cli-vm==0.1.0b10 azure-common==1.1.4 azure-graphrbac==0.30.0rc6 azure-mgmt==0.30.0rc5 azure-mgmt-authorization==0.30.0rc6 azure-mgmt-batch==0.30.0rc5 azure-mgmt-cdn==0.30.0rc5 azure-mgmt-cognitiveservices==0.30.0rc5 azure-mgmt-commerce==0.30.0rc5 azure-mgmt-compute==0.32.1 azure-mgmt-containerregistry==0.1.0 azure-mgmt-dns==0.30.0rc6 azure-mgmt-keyvault==0.30.0rc5 azure-mgmt-logic==0.30.0rc5 
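For the redhat_subscription behaviour above, the reporter's own workaround (unregister, then re-register while attaching the pool) can be expressed as two tasks; the credentials and the pool pattern are placeholders.

```
---
# Sketch of the two-step workaround described in the report. Credentials and
# the pool regular expression are placeholders.
- hosts: rhel_hosts
  become: yes
  tasks:
    - name: Unregister so a pool can be attached on re-registration
      redhat_subscription:
        state: absent

    - name: Re-register and attach the pool in one step
      redhat_subscription:
        state: present
        username: "{{ rhsm_username }}"
        password: "{{ rhsm_password }}"
        pool: "^Red Hat Enterprise Linux Server$"
```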
azure-mgmt-network==0.30.0rc6 azure-mgmt-notificationhubs==0.30.0rc5 azure-mgmt-nspkg==1.0.0 azure-mgmt-powerbiembedded==0.30.0rc5 azure-mgmt-redis==0.30.0rc5 azure-mgmt-resource==0.30.2 azure-mgmt-scheduler==0.30.0rc5 azure-mgmt-storage==0.30.0rc6 azure-mgmt-trafficmanager==0.30.0rc6 azure-mgmt-web==0.30.1 azure-nspkg==1.0.0 azure-servicebus==0.20.2 azure-servicemanagement-legacy==0.20.3 azure-storage==0.33.0 msrestazure==0.4.5 ``` ##### SUMMARY Vm creation is not successful due to error about not finding an existing Resource Group that is listed by azure cli fatal: [development-tools]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Parameter error: resource group Development not found""} ##### STEPS TO REPRODUCE ``` - name: Create a VM azure_rm_virtualmachine: resource_group: Development name: testvm image: offer: UbuntuServer publisher: Canonical sku: '14.04.4-LTS' ``` ##### EXPECTED RESULTS Expected vm to be created ##### ACTUAL RESULTS ``` fatal: [development-tools]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ad_user"": null, ""admin_password"": null, ""admin_username"": ""admin"", ""allocated"": true, ""append_tags"": true, ""client_id"": null, ""image"": { ""offer"": ""UbuntuServer"", ""publisher"": ""Canonical"", ""sku"": ""14.04.4-LTS"" }, ""location"": null, ""name"": ""testvm"", ""network_interface_names"": null, ""open_ports"": null, ""os_disk_caching"": ""ReadOnly"", ""os_type"": ""Linux"", ""password"": null, ""profile"": null, ""public_ip_allocation_method"": ""Static"", ""remove_on_absent"": [ ""all"" ], ""resource_group"": ""Development"", ""restarted"": false, ""secret"": null, ""short_hostname"": null, ""ssh_password_enabled"": true, ""ssh_public_keys"": null, ""started"": true, ""state"": ""present"", ""storage_account_name"": null, ""storage_blob_name"": null, ""storage_container_name"": ""vhds"", ""subnet_name"": null, ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": null, ""vm_size"": ""Standard_D1"" }, ""module_name"": ""azure_rm_virtualmachine"" }, ""msg"": ""Parameter error: resource group Development not found"" } ``` ``` ",True,"azure_rm_virtualmachine fails to find resource group upon vm creation - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ansible 2.2.0.0 also tried with github pip install at this time ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ``` pip freeze | grep azure azure==2.0.0rc5 azure-batch==0.30.0rc5 azure-cli==0.1.0b10 azure-cli-acr==0.1.0b10 azure-cli-acs==0.1.0b10 azure-cli-appservice==0.1.0b10 azure-cli-cloud==0.1.0b10 azure-cli-component==0.1.0b10 azure-cli-configure==0.1.0b10 azure-cli-container==0.1.0b10 azure-cli-context==0.1.0b10 azure-cli-core==0.1.0b10 azure-cli-feedback==0.1.0b10 azure-cli-network==0.1.0b10 azure-cli-profile==0.1.0b10 azure-cli-resource==0.1.0b10 azure-cli-role==0.1.0b10 azure-cli-storage==0.1.0b10 azure-cli-vm==0.1.0b10 azure-common==1.1.4 azure-graphrbac==0.30.0rc6 azure-mgmt==0.30.0rc5 azure-mgmt-authorization==0.30.0rc6 azure-mgmt-batch==0.30.0rc5 azure-mgmt-cdn==0.30.0rc5 azure-mgmt-cognitiveservices==0.30.0rc5 azure-mgmt-commerce==0.30.0rc5 azure-mgmt-compute==0.32.1 azure-mgmt-containerregistry==0.1.0 azure-mgmt-dns==0.30.0rc6 azure-mgmt-keyvault==0.30.0rc5 azure-mgmt-logic==0.30.0rc5 azure-mgmt-network==0.30.0rc6 azure-mgmt-notificationhubs==0.30.0rc5 azure-mgmt-nspkg==1.0.0 azure-mgmt-powerbiembedded==0.30.0rc5 azure-mgmt-redis==0.30.0rc5 
azure-mgmt-resource==0.30.2 azure-mgmt-scheduler==0.30.0rc5 azure-mgmt-storage==0.30.0rc6 azure-mgmt-trafficmanager==0.30.0rc6 azure-mgmt-web==0.30.1 azure-nspkg==1.0.0 azure-servicebus==0.20.2 azure-servicemanagement-legacy==0.20.3 azure-storage==0.33.0 msrestazure==0.4.5 ``` ##### SUMMARY Vm creation is not successful due to error about not finding an existing Resource Group that is listed by azure cli fatal: [development-tools]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Parameter error: resource group Development not found""} ##### STEPS TO REPRODUCE ``` - name: Create a VM azure_rm_virtualmachine: resource_group: Development name: testvm image: offer: UbuntuServer publisher: Canonical sku: '14.04.4-LTS' ``` ##### EXPECTED RESULTS Expected vm to be created ##### ACTUAL RESULTS ``` fatal: [development-tools]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ad_user"": null, ""admin_password"": null, ""admin_username"": ""admin"", ""allocated"": true, ""append_tags"": true, ""client_id"": null, ""image"": { ""offer"": ""UbuntuServer"", ""publisher"": ""Canonical"", ""sku"": ""14.04.4-LTS"" }, ""location"": null, ""name"": ""testvm"", ""network_interface_names"": null, ""open_ports"": null, ""os_disk_caching"": ""ReadOnly"", ""os_type"": ""Linux"", ""password"": null, ""profile"": null, ""public_ip_allocation_method"": ""Static"", ""remove_on_absent"": [ ""all"" ], ""resource_group"": ""Development"", ""restarted"": false, ""secret"": null, ""short_hostname"": null, ""ssh_password_enabled"": true, ""ssh_public_keys"": null, ""started"": true, ""state"": ""present"", ""storage_account_name"": null, ""storage_blob_name"": null, ""storage_container_name"": ""vhds"", ""subnet_name"": null, ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": null, ""vm_size"": ""Standard_D1"" }, ""module_name"": ""azure_rm_virtualmachine"" }, ""msg"": ""Parameter error: resource group Development not found"" } ``` ``` ",1,azure rm virtualmachine fails to find resource group upon vm creation issue type bug report component name azure rm virtualmachine ansible version ansible also tried with github pip install at this time configuration os environment linux pip freeze grep azure azure azure batch azure cli azure cli acr azure cli acs azure cli appservice azure cli cloud azure cli component azure cli configure azure cli container azure cli context azure cli core azure cli feedback azure cli network azure cli profile azure cli resource azure cli role azure cli storage azure cli vm azure common azure graphrbac azure mgmt azure mgmt authorization azure mgmt batch azure mgmt cdn azure mgmt cognitiveservices azure mgmt commerce azure mgmt compute azure mgmt containerregistry azure mgmt dns azure mgmt keyvault azure mgmt logic azure mgmt network azure mgmt notificationhubs azure mgmt nspkg azure mgmt powerbiembedded azure mgmt redis azure mgmt resource azure mgmt scheduler azure mgmt storage azure mgmt trafficmanager azure mgmt web azure nspkg azure servicebus azure servicemanagement legacy azure storage msrestazure summary vm creation is not successful due to error about not finding an existing resource group that is listed by azure cli fatal failed changed false failed true msg parameter error resource group development not found steps to reproduce name create a vm azure rm virtualmachine resource group development name testvm image offer ubuntuserver publisher canonical sku lts expected results expected vm to be created 
actual results fatal failed changed false failed true invocation module args ad user null admin password null admin username admin allocated true append tags true client id null image offer ubuntuserver publisher canonical sku lts location null name testvm network interface names null open ports null os disk caching readonly os type linux password null profile null public ip allocation method static remove on absent all resource group development restarted false secret null short hostname null ssh password enabled true ssh public keys null started true state present storage account name null storage blob name null storage container name vhds subnet name null subscription id null tags null tenant null virtual network name null vm size standard module name azure rm virtualmachine msg parameter error resource group development not found ,1 1178,5096335992.0,IssuesEvent,2017-01-03 17:52:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,service module: ubuntu xenial on travis: failure 1 running systemctl show for 'apache2': Failed to connect to bus: No such file or directory,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No extra ##### OS / ENVIRONMENT Docker in Travis, mapping ubuntu xenial in local connection ##### SUMMARY when using service task to check one is started, task is failed with above message ##### STEPS TO REPRODUCE See https://travis-ci.org/juju4/ansible-icinga2/jobs/150094107 https://travis-ci.org/juju4/ansible-bro-ids/jobs/150093516 https://travis-ci.org/juju4/ansible-mhn/jobs/150159300 (only on target xenial, working on trusty) On a local Vagrantfile with xenial (as found in github/#role#/test/vagrant): it's working normally... 
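One likely cause of the azure_rm_virtualmachine error above is that the module authenticates against a different subscription than the Azure CLI, so the existing resource group is simply not visible to it. A hedged sketch that passes the subscription and service-principal parameters explicitly (all of them appear as null in the invocation shown above); the variable names and values are placeholders.

```
---
# Sketch: authenticate explicitly so the module looks up the resource group
# in the intended subscription. All credential values are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Create the VM with explicit subscription and credentials
      azure_rm_virtualmachine:
        subscription_id: "{{ azure_subscription_id }}"
        client_id: "{{ azure_client_id }}"
        secret: "{{ azure_secret }}"
        tenant: "{{ azure_tenant }}"
        resource_group: Development
        name: testvm
        vm_size: Standard_D1
        admin_username: admin
        image:
          offer: UbuntuServer
          publisher: Canonical
          sku: '14.04.4-LTS'
```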
So there is something with travis environment ##### EXPECTED RESULTS test/vagrant local execution on xenial ``` TASK [bro-ids : check that mysql is running] *********************************** task path: /home/julien/Documents/script/homelab/roles/bro-ids/tasks/main.yml:80 <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936 `"" && echo ansible-tmp-1470432978.69-241202336337936=""` echo $HOME/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmp2IUx42 TO /home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/service <127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-drceqxkhrlhnomixtamaaklasdzascol; LANG=fr_FR.UTF-8 LC_ALL=fr_FR.UTF-8 LC_MESSAGES=fr_FR.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/service; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [broids] => {""changed"": false, ""invocation"": {""module_args"": {""arguments"": """", ""enabled"": null, ""name"": ""mysql"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""started""}, ""module_name"": ""service""}, ""name"": ""mysql"", ""state"": ""started""} ``` ##### ACTUAL RESULTS https://travis-ci.org/juju4/ansible-bro-ids/jobs/150166292 ``` TASK [bro-ids : check that mysql is running] *********************************** task path: /etc/ansible/roles/bro-ids/tasks/main.yml:80 ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016 `"" && echo 
ansible-tmp-1470433523.11-95534498356016=""` echo $HOME/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016 `"" ) && sleep 0' PUT /tmp/tmpMC1xcR TO /root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/service EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/service; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""arguments"": """", ""enabled"": null, ""name"": ""mysql"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""started""}, ""module_name"": ""service""}, ""msg"": ""failure 1 running systemctl show for 'mysql': Failed to connect to bus: No such file or directory\n""} ``` ",True,"service module: ubuntu xenial on travis: failure 1 running systemctl show for 'apache2': Failed to connect to bus: No such file or directory - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No extra ##### OS / ENVIRONMENT Docker in Travis, mapping ubuntu xenial in local connection ##### SUMMARY when using service task to check one is started, task is failed with above message ##### STEPS TO REPRODUCE See https://travis-ci.org/juju4/ansible-icinga2/jobs/150094107 https://travis-ci.org/juju4/ansible-bro-ids/jobs/150093516 https://travis-ci.org/juju4/ansible-mhn/jobs/150159300 (only on target xenial, working on trusty) On a local Vagrantfile with xenial (as found in github/#role#/test/vagrant): it's working normally... 
So there is something with travis environment ##### EXPECTED RESULTS test/vagrant local execution on xenial ``` TASK [bro-ids : check that mysql is running] *********************************** task path: /home/julien/Documents/script/homelab/roles/bro-ids/tasks/main.yml:80 <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936 `"" && echo ansible-tmp-1470432978.69-241202336337936=""` echo $HOME/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmp2IUx42 TO /home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/service <127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2202 -o 'IdentityFile=""/home/julien/Documents/script/homelab/roles/bro-ids/test/vagrant/.vagrant/machines/broids/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/home/julien/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-drceqxkhrlhnomixtamaaklasdzascol; LANG=fr_FR.UTF-8 LC_ALL=fr_FR.UTF-8 LC_MESSAGES=fr_FR.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/service; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1470432978.69-241202336337936/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [broids] => {""changed"": false, ""invocation"": {""module_args"": {""arguments"": """", ""enabled"": null, ""name"": ""mysql"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""started""}, ""module_name"": ""service""}, ""name"": ""mysql"", ""state"": ""started""} ``` ##### ACTUAL RESULTS https://travis-ci.org/juju4/ansible-bro-ids/jobs/150166292 ``` TASK [bro-ids : check that mysql is running] *********************************** task path: /etc/ansible/roles/bro-ids/tasks/main.yml:80 ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016 `"" && echo 
ansible-tmp-1470433523.11-95534498356016=""` echo $HOME/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016 `"" ) && sleep 0' PUT /tmp/tmpMC1xcR TO /root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/service EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/service; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470433523.11-95534498356016/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""arguments"": """", ""enabled"": null, ""name"": ""mysql"", ""pattern"": null, ""runlevel"": ""default"", ""sleep"": null, ""state"": ""started""}, ""module_name"": ""service""}, ""msg"": ""failure 1 running systemctl show for 'mysql': Failed to connect to bus: No such file or directory\n""} ``` ",1,service module ubuntu xenial on travis failure running systemctl show for failed to connect to bus no such file or directory issue type bug report component name service module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration no extra os environment docker in travis mapping ubuntu xenial in local connection summary when using service task to check one is started task is failed with above message steps to reproduce see only on target xenial working on trusty on a local vagrantfile with xenial as found in github role test vagrant it s working normally so there is something with travis environment expected results test vagrant local execution on xenial task task path home julien documents script homelab roles bro ids tasks main yml establish ssh connection for user vagrant ssh exec ssh c vvv o userknownhostsfile dev null o identitiesonly yes o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home julien documents script homelab roles bro ids test vagrant vagrant machines broids virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home julien ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp service ssh exec sftp b c vvv o userknownhostsfile dev null o identitiesonly yes o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home julien documents script homelab roles bro ids test vagrant vagrant machines broids virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home julien ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c vvv o userknownhostsfile dev null o identitiesonly yes o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home julien documents script homelab roles bro ids test vagrant vagrant machines broids virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home julien ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success drceqxkhrlhnomixtamaaklasdzascol lang fr fr utf lc all 
fr fr utf lc messages fr fr utf usr bin python home vagrant ansible tmp ansible tmp service rm rf home vagrant ansible tmp ansible tmp dev null sleep ok changed false invocation module args arguments enabled null name mysql pattern null runlevel default sleep null state started module name service name mysql state started actual results task task path etc ansible roles bro ids tasks main yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp service exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp service rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args arguments enabled null name mysql pattern null runlevel default sleep null state started module name service msg failure running systemctl show for mysql failed to connect to bus no such file or directory n ,1 1256,5330147468.0,IssuesEvent,2017-02-15 16:25:44,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Error with module apt autoremove,affects_2.1 bug_report P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt autoremove ##### ANSIBLE VERSION ``` bash ansible 2.1.0.0 config file = /home/galindro/git_repos/devops/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` bash $ cat ansible.cfg [defaults] host_key_checking = False [ssh_connection] pipelining=False ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=""~/.ansible/tmp/ansible-ssh-%h-%p-%r"" ``` I'm NOT using any ANSIBLE_\* environment variable ##### OS / ENVIRONMENT ``` No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.5 (jessie) Release: 8.5 Codename: jessie ``` ##### SUMMARY I'm used the apt module with autoremove option and this error was showed when I run the task: ``` TASK [common : autoremove unused packages] ************************************* fatal: [172.30.2.214]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ##### STEPS TO REPRODUCE Create this task: ``` yaml - name: autoremove unused packages apt: autoremove: yes sudo: yes tags: - infra ``` Put that task inside a simple role Create a playbook: ``` yaml - hosts: all role: - myrolename environment: TMOUT: 600 ``` Run the playbook ``` ansible-playbook -i ec2.py -v -b myplay.yml ``` P.S.: my host is an instance running on amazon ec2 ##### EXPECTED RESULTS This error: ``` bash TASK [myrolename : autoremove unused packages] ************************************* fatal: [172.30.2.214]: FAILED! 
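The "Failed to connect to bus" error above is typical of Ubuntu Xenial inside a Docker container: the service module picks systemd as the service manager, but no systemd instance (and therefore no D-Bus socket) is running. As an interim, container-friendly check, the SysV wrapper can be queried directly; a hedged sketch in which only the service name is taken from the report. Newer Ansible releases also let the service module be forced onto a specific backend via its `use` option, where available.

```
---
# Sketch of a check that works without a systemd bus (e.g. Docker on Travis).
# Only the service name comes from the report.
- hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Check that mysql is running via the SysV service wrapper
      command: service mysql status
      register: mysql_status
      changed_when: false
      failed_when: mysql_status.rc != 0
```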
=> {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ##### ACTUAL RESULTS Running with -vvvv: ``` TASK [myrolename : autoremove unused packages] ************************************* task path: /home/galindro/git_repos/devops/roles/myrolename/tasks/debian.yml:52 <172.30.2.214> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.30.2.214> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 172.30.2.214 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254 `"" && echo ansible-tmp-1466722720.22-229804017504254=""` echo $HOME/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254 `"" ) && sleep 0'""'""'' <172.30.2.214> PUT /tmp/tmpBiapsW TO /home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/apt <172.30.2.214> SSH: EXEC sftp -b - -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 '[172.30.2.214]' <172.30.2.214> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.30.2.214> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -tt 172.30.2.214 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-yhgovfdopoujnzwegznjpbbaoqcpcnyd; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 TMOUT=600 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/apt; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [172.30.2.214]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""apt""}, ""module_stderr"": ""OpenSSH_6.7p1 Debian-5+deb8u2, OpenSSL 1.0.1t 3 May 2016\r\ndebug1: Reading configuration data /home/galindro/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 32546\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 172.30.2.214 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_uvVsjU/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_uvVsjU/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"Error with module apt autoremove - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt autoremove ##### ANSIBLE VERSION ``` bash ansible 2.1.0.0 config file = /home/galindro/git_repos/devops/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` bash $ cat ansible.cfg [defaults] host_key_checking = False [ssh_connection] pipelining=False ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=""~/.ansible/tmp/ansible-ssh-%h-%p-%r"" ``` I'm NOT using any ANSIBLE_\* environment variable ##### OS / ENVIRONMENT ``` No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.5 (jessie) Release: 8.5 Codename: jessie ``` ##### SUMMARY I'm used the apt module with autoremove option and this error was showed when I run the task: ``` TASK [common : autoremove unused packages] ************************************* fatal: [172.30.2.214]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ##### STEPS TO REPRODUCE Create this task: ``` yaml - name: autoremove unused packages apt: autoremove: yes sudo: yes tags: - infra ``` Put that task inside a simple role Create a playbook: ``` yaml - hosts: all role: - myrolename environment: TMOUT: 600 ``` Run the playbook ``` ansible-playbook -i ec2.py -v -b myplay.yml ``` P.S.: my host is an instance running on amazon ec2 ##### EXPECTED RESULTS This error: ``` bash TASK [myrolename : autoremove unused packages] ************************************* fatal: [172.30.2.214]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_RpjDPt/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ##### ACTUAL RESULTS Running with -vvvv: ``` TASK [myrolename : autoremove unused packages] ************************************* task path: /home/galindro/git_repos/devops/roles/myrolename/tasks/debian.yml:52 <172.30.2.214> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.30.2.214> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 172.30.2.214 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254 `"" && echo ansible-tmp-1466722720.22-229804017504254=""` echo $HOME/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254 `"" ) && sleep 0'""'""'' <172.30.2.214> PUT /tmp/tmpBiapsW TO /home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/apt <172.30.2.214> SSH: EXEC sftp -b - -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 '[172.30.2.214]' <172.30.2.214> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.30.2.214> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' -o StrictHostKeyChecking=no -o Port=40 -o 'IdentityFile=""sbkey_20150204_app.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -tt 172.30.2.214 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-yhgovfdopoujnzwegznjpbbaoqcpcnyd; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 TMOUT=600 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/apt; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1466722720.22-229804017504254/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [172.30.2.214]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""apt""}, ""module_stderr"": ""OpenSSH_6.7p1 Debian-5+deb8u2, OpenSSL 1.0.1t 3 May 2016\r\ndebug1: Reading configuration data /home/galindro/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 32546\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 172.30.2.214 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_uvVsjU/ansible_module_apt.py\"", line 842, in \r\n main()\r\n File \""/tmp/ansible_uvVsjU/ansible_module_apt.py\"", line 802, in main\r\n for package in packages:\r\nTypeError: 'NoneType' object is not iterable\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,error with module apt autoremove issue type bug report component name apt autoremove ansible version bash ansible config file home galindro git repos devops ansible cfg configured module search path default w o overrides configuration bash cat ansible cfg host key checking false pipelining false ssh args o forwardagent yes o controlmaster auto o controlpersist o controlpath ansible tmp ansible ssh h p r i m not using any ansible environment variable os environment no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie summary i m used the apt module with autoremove option and this error was showed when i run the task task fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file tmp ansible rpjdpt ansible module apt py line in r n main r n file tmp ansible rpjdpt ansible module apt py line in main r n for package in packages r ntypeerror nonetype object is not iterable r n msg module failure parsed false steps to reproduce create this task yaml name autoremove unused packages apt autoremove yes sudo yes tags infra put that task inside a simple role create a playbook yaml hosts all role myrolename environment tmout run the playbook ansible playbook i py v b myplay yml p s my host is an instance running on amazon expected results this error bash task fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file tmp ansible rpjdpt ansible module apt py line in r n main r n file tmp ansible rpjdpt ansible module apt py line in main r n for package in packages r ntypeerror nonetype object is not iterable r n msg module failure parsed false actual results running with vvvv task task path home galindro git repos devops roles myrolename tasks debian yml establish ssh connection for user ubuntu ssh exec ssh c vvv o forwardagent yes o controlmaster auto o controlpersist o controlpath ansible tmp ansible ssh h p r o stricthostkeychecking no o port o identityfile sbkey app pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o 
connecttimeout bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpbiapsw to home ubuntu ansible tmp ansible tmp apt ssh exec sftp b c vvv o forwardagent yes o controlmaster auto o controlpersist o controlpath ansible tmp ansible ssh h p r o stricthostkeychecking no o port o identityfile sbkey app pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout establish ssh connection for user ubuntu ssh exec ssh c vvv o forwardagent yes o controlmaster auto o controlpersist o controlpath ansible tmp ansible ssh h p r o stricthostkeychecking no o port o identityfile sbkey app pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout tt bin sh c sudo h s n u root bin sh c echo become success yhgovfdopoujnzwegznjpbbaoqcpcnyd lang en us utf lc all en us utf tmout lc messages en us utf usr bin python home ubuntu ansible tmp ansible tmp apt rm rf home ubuntu ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name apt module stderr openssh debian openssl may r reading configuration data home galindro ssh config r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to closed r n module stdout traceback most recent call last r n file tmp ansible uvvsju ansible module apt py line in r n main r n file tmp ansible uvvsju ansible module apt py line in main r n for package in packages r ntypeerror nonetype object is not iterable r n msg module failure parsed false ,1 1736,6574864289.0,IssuesEvent,2017-09-11 14:19:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Document Git module's result,affects_2.1 docs_report waiting_on_maintainer,"##### ISSUE TYPE Documentation Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY The Git module docs don't indicate that any result is available. In fact, the git module provides ""after"" and ""before"" results which are the commit IDs before and after the git command was run (I presume). This should be documented. ",True,"Document Git module's result - ##### ISSUE TYPE Documentation Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY The Git module docs don't indicate that any result is available. In fact, the git module provides ""after"" and ""before"" results which are the commit IDs before and after the git command was run (I presume). This should be documented. 
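The apt traceback above ends in `for package in packages` with `packages` being None, which happens when `autoremove: yes` is the only thing the task asks for. Until the module handles that case, one stand-in is to call apt-get directly; the `changed_when` test on apt-get's summary line is an assumption.

```
---
# Stand-in sketch for 'apt: autoremove=yes': run apt-get directly and report
# a change only when something was actually removed. The '0 to remove'
# heuristic is an assumption about apt-get's summary output.
- hosts: all
  become: yes
  tasks:
    - name: Autoremove unused packages (workaround for the NoneType failure)
      command: apt-get -y autoremove
      register: autoremove_result
      changed_when: "'0 to remove' not in autoremove_result.stdout"
```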
",1,document git module s result issue type documentation report component name git ansible version ansible config file configured module search path default w o overrides summary the git module docs don t indicate that any result is available in fact the git module provides after and before results which are the commit ids before and after the git command was run i presume this should be documented ,1 1692,6574191942.0,IssuesEvent,2017-09-11 11:54:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_facts: very limited functionality,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Distributor ID: Ubuntu Description: Ubuntu 16.04.1 LTS Release: 16.04 Codename: xenial ##### SUMMARY looks like ios_facts module covering only cisco routers, but switches running different version of software failing with different commands failure ##### STEPS TO REPRODUCE Tried on three different devices: WS-C4500X-16 - didn't worked bootflash:/cat4500e-universalk9.SPA.03.06.04.E.152-2.E4.bin WS-C4507R - didn't worked slot0:cat4500-entservicesk9-mz.122-31.SGA7.bin ISR4451-X/K9 - worked bootflash:/isr4400-universalk9.03.11.01.S.154-1.S1-std.SPA.bin ``` - name: DEFINE set_fact: connection: host: ""{{ inventory_hostname }}"" username: ""{{ creds['username'] }}"" password: ""{{ creds['password'] }}"" - name: FACTS ios_facts: provider: ""{{ connection }}"" gather_subset: all ``` ##### EXPECTED RESULTS Need to improve functionality or add definition which type of cisco devices can be used ##### ACTUAL RESULTS Below you can find example, with problem related to command which not existing in switch envrionment ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: lgb3-wvrtcore-a# fatal: [lgb3-wvrtcore-a]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_a6NdY9/ansible_module_ios_facts.py\"", line 461, in \n main()\n File \""/tmp/ansible_a6NdY9/ansible_module_ios_facts.py\"", line 443, in main\n runner.run()\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 170, in run\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 98, in run_commands\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 253, in run_commands\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 250, in execute\nansible.module_utils.network.NetworkError: matched error in response: show memory statistics | include Processor\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nlgb3-wvrtcore-a#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",True,"ios_facts: very limited functionality - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Distributor ID: Ubuntu Description: Ubuntu 16.04.1 LTS Release: 16.04 Codename: xenial ##### SUMMARY looks like ios_facts module covering only cisco routers, but switches running different version of software failing with different commands failure ##### STEPS TO REPRODUCE Tried on three different devices: WS-C4500X-16 - didn't worked bootflash:/cat4500e-universalk9.SPA.03.06.04.E.152-2.E4.bin WS-C4507R - didn't worked slot0:cat4500-entservicesk9-mz.122-31.SGA7.bin ISR4451-X/K9 - worked bootflash:/isr4400-universalk9.03.11.01.S.154-1.S1-std.SPA.bin ``` - name: DEFINE set_fact: connection: host: ""{{ inventory_hostname }}"" username: ""{{ creds['username'] }}"" password: ""{{ creds['password'] }}"" - name: FACTS ios_facts: provider: ""{{ connection }}"" gather_subset: all ``` ##### EXPECTED RESULTS Need to improve functionality or add definition which type of cisco devices can be used ##### ACTUAL RESULTS Below you can find example, with problem related to command which not existing in switch envrionment ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: lgb3-wvrtcore-a# fatal: [lgb3-wvrtcore-a]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_a6NdY9/ansible_module_ios_facts.py\"", line 461, in \n main()\n File \""/tmp/ansible_a6NdY9/ansible_module_ios_facts.py\"", line 443, in main\n runner.run()\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 170, in run\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 98, in run_commands\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 253, in run_commands\n File \""/tmp/ansible_a6NdY9/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 250, in execute\nansible.module_utils.network.NetworkError: matched error in response: show memory statistics | include Processor\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nlgb3-wvrtcore-a#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",1,ios facts very limited functionality issue type bug report component name ios facts ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific distributor id ubuntu description ubuntu lts release codename xenial summary looks like ios facts module covering only cisco routers but switches running different version of software failing with different commands failure steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used tried on three different devices ws didn t worked bootflash spa e bin ws didn t worked mz bin x worked bootflash s std spa bin name define set fact connection host inventory hostname username creds password creds name facts ios facts provider connection gather subset all expected results need to improve functionality or add definition which type of cisco devices can be used actual results below you can find example with problem related to command which not existing in switch envrionment an exception occurred during task execution to see the full traceback use vvv the error was wvrtcore a fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module ios facts py line in n main n file tmp ansible ansible module ios facts py line in main n runner run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response show memory statistics include processor r n r n invalid input detected at marker r n r wvrtcore a n module stdout msg module failure ,1 750,4351318341.0,IssuesEvent,2016-07-31 19:51:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt-rpm,bug_report waiting_on_maintainer,"##### Issue Type: Bug Report ##### Component Name: apt_rpm ##### Ansible Version: N/A ##### Summary: When I try to install a package using this command. It gives me a error. 
ansible remote_hosts -m apt_rpm -s -a ""pkg=elinks state=present"" ![image](https://cloud.githubusercontent.com/assets/16080783/12416617/7103fbee-bec8-11e5-9284-0ab4a5921b9b.png) ![image](https://cloud.githubusercontent.com/assets/16080783/12416638/9467d0e2-bec8-11e5-9f9c-7c73275f5bb9.png) FYI, I tried installing through Apt-get command and it works.",True,"apt-rpm - ##### Issue Type: Bug Report ##### Component Name: apt_rpm ##### Ansible Version: N/A ##### Summary: When I try to install a package using this command. It gives me a error. ansible remote_hosts -m apt_rpm -s -a ""pkg=elinks state=present"" ![image](https://cloud.githubusercontent.com/assets/16080783/12416617/7103fbee-bec8-11e5-9284-0ab4a5921b9b.png) ![image](https://cloud.githubusercontent.com/assets/16080783/12416638/9467d0e2-bec8-11e5-9f9c-7c73275f5bb9.png) FYI, I tried installing through Apt-get command and it works.",1,apt rpm issue type bug report component name apt rpm ansible version n a summary when i try to install a package using this command it gives me a error ansible remote hosts m apt rpm s a pkg elinks state present fyi i tried installing through apt get command and it works ,1 1701,6574387311.0,IssuesEvent,2017-09-11 12:42:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,dellos9_command confirm prompt timeout,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME dellos9_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY copy config to the startup-config on the local flash breaks because of yes/no prompt. ##### STEPS TO REPRODUCE ``` - name: issue delete and copy old config dellos9_command: provider: ""{{ cli }}"" commands: - ""delete flash://{{ inventory_hostname }}.conf no-confirm"" - ""delete flash://startup-conifg no-confirm"" - ""copy tftp://10.10.240.253:/{{ inventory_hostname }}.conf flash://{{ inventory_hostname }}.conf"" - ""copy flash://{{ inventory_hostname }}.conf startup-config"" ``` ##### EXPECTED RESULTS copy command should have completed and provided a default yes for the os prompt. rr1-n22-r14-4048hl-5-1a#$sh://rr1-n22-r14-4048hl-5-1a.conf startup-config File with same name already exist. Proceed to copy the file [confirm yes/no]: ##### ACTUAL RESULTS ``` fatal: [rr1-n22-r14-4048hl-5-1a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""delete flash://rr1-n22-r14-4048hl-5-1a.conf no-confirm"", ""delete flash://startup-conifg no-confirm"", ""copy tftp://10.10.240.253:/rr1-n22-r14-4048hl-5-1a.conf flash://rr1-n22-r14-4048hl-5-1a.conf"", ""copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config"" ], ""host"": ""10.10.234.161"", ""interval"": 1, ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.234.161"", ""ssh_keyfile"": ""/srv/tftpboot/masd-rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""retries"": 10, ""ssh_keyfile"": ""/srv/tftpboot/masd-rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""username"": ""admin"", ""wait_for"": null }, ""module_name"": ""dellos9_command"" }, ""msg"": ""timeout trying to send command: copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config\r"" } ``` ",True,"dellos9_command confirm prompt timeout - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME dellos9_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY copy config to the startup-config on the local flash breaks because of yes/no prompt. ##### STEPS TO REPRODUCE ``` - name: issue delete and copy old config dellos9_command: provider: ""{{ cli }}"" commands: - ""delete flash://{{ inventory_hostname }}.conf no-confirm"" - ""delete flash://startup-conifg no-confirm"" - ""copy tftp://10.10.240.253:/{{ inventory_hostname }}.conf flash://{{ inventory_hostname }}.conf"" - ""copy flash://{{ inventory_hostname }}.conf startup-config"" ``` ##### EXPECTED RESULTS copy command should have completed and provided a default yes for the os prompt. rr1-n22-r14-4048hl-5-1a#$sh://rr1-n22-r14-4048hl-5-1a.conf startup-config File with same name already exist. Proceed to copy the file [confirm yes/no]: ##### ACTUAL RESULTS ``` fatal: [rr1-n22-r14-4048hl-5-1a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""delete flash://rr1-n22-r14-4048hl-5-1a.conf no-confirm"", ""delete flash://startup-conifg no-confirm"", ""copy tftp://10.10.240.253:/rr1-n22-r14-4048hl-5-1a.conf flash://rr1-n22-r14-4048hl-5-1a.conf"", ""copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config"" ], ""host"": ""10.10.234.161"", ""interval"": 1, ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.234.161"", ""ssh_keyfile"": ""/srv/tftpboot/masd-rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""retries"": 10, ""ssh_keyfile"": ""/srv/tftpboot/masd-rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""username"": ""admin"", ""wait_for"": null }, ""module_name"": ""dellos9_command"" }, ""msg"": ""timeout trying to send command: copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config\r"" } ``` ",1, command confirm prompt timeout issue type bug report component name command ansible version ansible config file home emarq solutions network automation mas ansible dell ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux generic ubuntu smp wed oct utc gnu linux summary copy config to the startup config on the local flash breaks because of yes no prompt steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name issue delete and copy old config command provider cli commands delete flash inventory hostname conf no confirm delete flash startup conifg no confirm copy tftp inventory hostname conf flash inventory hostname conf copy flash inventory hostname conf startup config expected results copy command should have completed and provided a default yes for the os prompt sh conf startup config file with same name already exist proceed to copy the file actual results fatal failed changed false failed true invocation module args auth pass null authorize false commands delete flash conf no confirm delete flash startup conifg no confirm copy tftp conf flash conf copy flash conf startup config host interval password null port null provider host ssh keyfile srv tftpboot masd rsa pub transport cli username admin retries ssh keyfile srv tftpboot masd rsa pub timeout transport cli username admin wait for null module name command msg timeout trying to send command copy flash conf startup config r ,1 1870,6577493454.0,IssuesEvent,2017-09-12 01:18:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_router can't take in port id as interface,affects_2.0 cloud feature_idea openstack waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_router ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /root/setup-infra/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes made to ansible.cfg ##### OS / ENVIRONMENT I'm running Ubuntu 14.04, but this module is not platform-specific I don't think. ##### SUMMARY os_router can't take in a port ID as an internal interface, only a subnet. 
See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321 The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that. ##### STEPS TO REPRODUCE This added feature would allow you to do something like: ``` - name: Create port for my_net os_port: state: present name: ""my_net_port"" network: ""my_net"" fixed_ips: - ip_address: ""192.168.100.50"" register: my_net_port_results - name: Create my router os_router: name: my_router state: present network: ""ext-net"" interfaces: - port: ""{{ my_net_port_results.id }}"" - ""some_other_priv_subnet"" ``` This would allow the user to specify either a subnet or a port for a router internal interface. ##### EXPECTED RESULTS The router would have two interfaces with the example playbook shown above. It would have the default gateway of ""some_other_priv_subnet"", and it would have the ip assigned to ""my_net_port"". This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module. ##### ACTUAL RESULTS TBD ",True,"os_router can't take in port id as interface - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_router ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /root/setup-infra/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes made to ansible.cfg ##### OS / ENVIRONMENT I'm running Ubuntu 14.04, but this module is not platform-specific I don't think. ##### SUMMARY os_router can't take in a port ID as an internal interface, only a subnet. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321 The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that. ##### STEPS TO REPRODUCE This added feature would allow you to do something like: ``` - name: Create port for my_net os_port: state: present name: ""my_net_port"" network: ""my_net"" fixed_ips: - ip_address: ""192.168.100.50"" register: my_net_port_results - name: Create my router os_router: name: my_router state: present network: ""ext-net"" interfaces: - port: ""{{ my_net_port_results.id }}"" - ""some_other_priv_subnet"" ``` This would allow the user to specify either a subnet or a port for a router internal interface. ##### EXPECTED RESULTS The router would have two interfaces with the example playbook shown above. It would have the default gateway of ""some_other_priv_subnet"", and it would have the ip assigned to ""my_net_port"". This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module. 
##### ACTUAL RESULTS TBD ",1,os router can t take in port id as interface issue type feature idea component name os router ansible version ansible config file root setup infra ansible cfg configured module search path default w o overrides configuration no changes made to ansible cfg os environment i m running ubuntu but this module is not platform specific i don t think summary os router can t take in a port id as an internal interface only a subnet see the neutron cli allows you to specify a port id as an interface and therefore allow you to specify an arbitrary ip for that interface it would be nice if the ansible os router module would allow you to do that steps to reproduce this added feature would allow you to do something like name create port for my net os port state present name my net port network my net fixed ips ip address register my net port results name create my router os router name my router state present network ext net interfaces port my net port results id some other priv subnet this would allow the user to specify either a subnet or a port for a router internal interface expected results the router would have two interfaces with the example playbook shown above it would have the default gateway of some other priv subnet and it would have the ip assigned to my net port this would allow subnets to be attached to multiple routers which currently isn t do able through the os router module actual results tbd ,1 792,4389994710.0,IssuesEvent,2016-08-09 00:42:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive failed to transfer on devel,bug_report P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ansible 2.2.0 (devel 134d70e7b9) last updated 2016/07/18 12:43:12 (GMT +1000) lib/ansible/modules/core: (detached HEAD 7de287237f) last updated 2016/07/18 12:43:16 (GMT +1000) lib/ansible/modules/extras: (detached HEAD 68ca157f3b) last updated 2016/07/18 12:43:16 (GMT +1000) config file = /home/linus/Documents/ansible-playbooks/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} hostfile = ./inventory remote_user = centos host_key_checking = False log_path = /var/log/ansible.log ##### OS / ENVIRONMENT management node: ubuntu 14.04 remote node: centos 7 ##### SUMMARY unarchive module shows error when src is an url using latest Ansible devel but works fine using the same playbook using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE ``` unarchive: copy: no src: https://someurl.tar.gz dest: /tmp/somefolder owner: user group: user ``` ##### EXPECTED RESULTS changed ##### ACTUAL RESULTS fatal: [xx.xx.xx.xx]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Source 'https://someurl.tar.gz' failed to transfer""} ",True,"unarchive failed to transfer on devel - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ansible 2.2.0 (devel 134d70e7b9) last updated 2016/07/18 12:43:12 (GMT +1000) lib/ansible/modules/core: (detached HEAD 7de287237f) last updated 2016/07/18 12:43:16 (GMT +1000) lib/ansible/modules/extras: (detached HEAD 68ca157f3b) last updated 2016/07/18 12:43:16 (GMT +1000) config file = /home/linus/Documents/ansible-playbooks/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} hostfile = ./inventory remote_user = centos host_key_checking = False log_path = /var/log/ansible.log ##### OS / ENVIRONMENT management node: ubuntu 14.04 remote node: centos 7 ##### SUMMARY unarchive module shows error when src is an url using latest Ansible devel but works fine using the same playbook using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE ``` unarchive: copy: no src: https://someurl.tar.gz dest: /tmp/somefolder owner: user group: user ``` ##### EXPECTED RESULTS changed ##### ACTUAL RESULTS fatal: [xx.xx.xx.xx]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Source 'https://someurl.tar.gz' failed to transfer""} ",1,unarchive failed to transfer on devel issue type bug report component name unarchive ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home linus documents ansible playbooks ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible managed ansible managed file modified on y m d h m s by uid on host hostfile inventory remote user centos host key checking false log path var log ansible log os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific management node ubuntu remote node centos summary unarchive module shows error when src is an url using latest ansible devel but works fine using the same playbook using ansible stable version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used unarchive copy no src dest tmp somefolder owner user group user expected results changed actual results fatal failed changed false failed true msg source failed to transfer ,1 1717,6574472923.0,IssuesEvent,2017-09-11 13:01:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Copy Module Fails with relatively large files,affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Copy module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500) lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500) lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat /etc/ansible/hosts local ansible_host=127.0.0.1 test_host ansible_host=192.168.56.50 $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg 
[defaults] log_path = /var/log/ansible.log [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT ``` $ hostnamectl Static hostname: ansible Icon name: computer-vm Chassis: vm Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5 Operating System: Ubuntu 14.04.5 LTS Kernel: Linux 3.13.0-100-generic Architecture: x86_64 ``` ##### SUMMARY When trying to copy a large file (jdk installer 195Mb) the copy module fails, but works perfectly with small files (conf files for example) ##### STEPS TO REPRODUCE Run an ad-hoc command with the large file ``` ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644' ``` ##### EXPECTED RESULTS Expected a successfully file copy ##### ACTUAL RESULTS It fails with MemoryError ``` Using /etc/ansible/ansible.cfg as config file An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/home/vagrant/ansible/lib/ansible/plugins/action/copy.py"", line 157, in run source_full = self._loader.get_real_file(source_full) File ""/home/vagrant/ansible/lib/ansible/parsing/dataloader.py"", line 402, in get_real_file if is_encrypted_file(f): File ""/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py"", line 152, in is_encrypted_file b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict') File ""/home/vagrant/ansible/lib/ansible/module_utils/_text.py"", line 177, in to_text return obj.decode(encoding, errors) MemoryError test_host | FAILED! 
=> { ""failed"": true, ""msg"": ""Unexpected failure during module execution."", ""stdout"": """" } ```",True,"Copy Module Fails with relatively large files - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Copy module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500) lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500) lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat /etc/ansible/hosts local ansible_host=127.0.0.1 test_host ansible_host=192.168.56.50 $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg [defaults] log_path = /var/log/ansible.log [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT ``` $ hostnamectl Static hostname: ansible Icon name: computer-vm Chassis: vm Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5 Operating System: Ubuntu 14.04.5 LTS Kernel: Linux 3.13.0-100-generic Architecture: x86_64 ``` ##### SUMMARY When trying to copy a large file (jdk installer 195Mb) the copy module fails, but works perfectly with small files (conf files for example) ##### STEPS TO REPRODUCE Run an ad-hoc command with the large file ``` ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644' ``` ##### EXPECTED RESULTS Expected a successfully file copy ##### ACTUAL RESULTS It fails with MemoryError ``` Using /etc/ansible/ansible.cfg as config file An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/home/vagrant/ansible/lib/ansible/plugins/action/copy.py"", line 157, in run source_full = self._loader.get_real_file(source_full) File ""/home/vagrant/ansible/lib/ansible/parsing/dataloader.py"", line 402, in get_real_file if is_encrypted_file(f): File ""/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py"", line 152, in is_encrypted_file b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict') File ""/home/vagrant/ansible/lib/ansible/module_utils/_text.py"", line 177, in to_text return obj.decode(encoding, errors) MemoryError test_host | FAILED! 
=> { ""failed"": true, ""msg"": ""Unexpected failure during module execution."", ""stdout"": """" } ```",1,copy module fails with relatively large files issue type bug report component name copy module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration cat etc ansible hosts local ansible host test host ansible host egrep v etc ansible ansible cfg log path var log ansible log os environment hostnamectl static hostname ansible icon name computer vm chassis vm boot id operating system ubuntu lts kernel linux generic architecture summary when trying to copy a large file jdk installer the copy module fails but works perfectly with small files conf files for example steps to reproduce run an ad hoc command with the large file ansible vvv test host s m copy a src vagrant jdk windows exe dest var www html owner www data group www data mode expected results expected a successfully file copy actual results it fails with memoryerror using etc ansible ansible cfg as config file an exception occurred during task execution the full traceback is traceback most recent call last file home vagrant ansible lib ansible executor task executor py line in run res self execute file home vagrant ansible lib ansible executor task executor py line in execute result self handler run task vars variables file home vagrant ansible lib ansible plugins action copy py line in run source full self loader get real file source full file home vagrant ansible lib ansible parsing dataloader py line in get real file if is encrypted file f file home vagrant ansible lib ansible parsing vault init py line in is encrypted file b vaulttext to bytes to text vaulttext encoding ascii errors strict encoding ascii errors strict file home vagrant ansible lib ansible module utils text py line in to text return obj decode encoding errors memoryerror test host failed failed true msg unexpected failure during module execution stdout ,1 1795,6575902165.0,IssuesEvent,2017-09-11 17:46:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,shell - not reading .login/.bash_login or .profile/.bash_profile or .bashrc,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/harald/vagrantstuff/node/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg only added this line: inventory = ./hosts Otherwise it's untouched. ##### OS / ENVIRONMENT Linux (Ubuntu on host, Debian on node to be configured) ##### SUMMARY Commands via ""shell"" do not read the .profile, .bashrc, .login, .bash_profile of the user of who runs ansible. ##### STEPS TO REPRODUCE 1. Have set up ssh passwordless authentication between Ansible host and worker node (=node3) 2. User has /bin/bash as default shell on node 3. On node have ""export LAST=bashrc"" resp. ""export LAST=bash_profile"" in ~/.bashrc resp. ~/.bash_profile (LAST would contain the last profile file used) 4. ""ssh node3"" with subsequent echo $LAST will show bash_profile 5. 
ansible node3 -m shell -a 'echo $LAST' will show nothing ``` harald@giga:~/vagrantstuff/node/ansible$ ssh node3 ---------------------------------------------------------------- Debian GNU/Linux 8.5 (jessie) built 2016-08-28 ---------------------------------------------------------------- Last login: Thu Sep 22 10:07:18 2016 from giga.lan harald@node3:~$ echo ""SHELL=$SHELL LAST=$LAST"" SHELL=/bin/bash LAST=bash_profile harald@node3:~$ grep harald /etc/passwd harald:x:2000:100:Harald Kubota:/home/harald:/bin/bash harald@node3:~$ egrep 'LAST=|export LAST' .profile .bashrc .login .bash_login .bash_profile .profile:LAST=profile .profile:export LAST .bashrc:LAST=bashrc .bashrc:export LAST .login:LAST=login .login:export LAST .bash_login:LAST=bash_login .bash_login:export LAST .bash_profile:LAST=bash_profile .bash_profile:export LAST harald@node3:~$ exit logout Connection to node3 closed. harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` Using a playbook reveals the same: the .profile or .bash_profile since /bin/bash is my shell, is not used. ``` - hosts: node3 gather_facts: true tasks: - name: Testing to run node shell: echo ""SHELL=$SHELL LAST=$LAST"" #environment: # PATH: ""/home/harald/node:{{ ansible_env.PATH }}"" args: executable: /bin/bash ``` ##### EXPECTED RESULTS The profile files should be used. Manual setting PATH environment variable in playbooks works, but this is highly inconvenient if this is needed for every shell statement. ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST=bash_profile ``` ##### ACTUAL RESULTS ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` or via ansible-playbook: ``` harald@giga:~/vagrantstuff/node/ansible$ ansible-playbook -v play.yml Using /home/harald/vagrantstuff/node/ansible/ansible.cfg as config file PLAY [node3] ******************************************************************* TASK [setup] ******************************************************************* ok: [node3] TASK [Testing to run node] ***************************************************** changed: [node3] => {""changed"": true, ""cmd"": ""echo \""SHELL=$SHELL LAST=$LAST\"""", ""delta"": ""0:00:00.008589"", ""end"": ""2016-09-22 10:20:48.320586"", ""rc"": 0, ""start"": ""2016-09-22 10:20:48.311997"", ""stderr"": """", ""stdout"": ""SHELL=/bin/bash LAST="", ""stdout_lines"": [""SHELL=/bin/bash LAST=""], ""warnings"": []} PLAY RECAP ********************************************************************* node3 : ok=2 changed=1 unreachable=0 failed=0 ``` LAST is supposed to show bash_profile just like the interactive login did. ",True,"shell - not reading .login/.bash_login or .profile/.bash_profile or .bashrc - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/harald/vagrantstuff/node/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg only added this line: inventory = ./hosts Otherwise it's untouched. ##### OS / ENVIRONMENT Linux (Ubuntu on host, Debian on node to be configured) ##### SUMMARY Commands via ""shell"" do not read the .profile, .bashrc, .login, .bash_profile of the user of who runs ansible. ##### STEPS TO REPRODUCE 1. 
Have set up ssh passwordless authentication between Ansible host and worker node (=node3) 2. User has /bin/bash as default shell on node 3. On node have ""export LAST=bashrc"" resp. ""export LAST=bash_profile"" in ~/.bashrc resp. ~/.bash_profile (LAST would contain the last profile file used) 4. ""ssh node3"" with subsequent echo $LAST will show bash_profile 5. ansible node3 -m shell -a 'echo $LAST' will show nothing ``` harald@giga:~/vagrantstuff/node/ansible$ ssh node3 ---------------------------------------------------------------- Debian GNU/Linux 8.5 (jessie) built 2016-08-28 ---------------------------------------------------------------- Last login: Thu Sep 22 10:07:18 2016 from giga.lan harald@node3:~$ echo ""SHELL=$SHELL LAST=$LAST"" SHELL=/bin/bash LAST=bash_profile harald@node3:~$ grep harald /etc/passwd harald:x:2000:100:Harald Kubota:/home/harald:/bin/bash harald@node3:~$ egrep 'LAST=|export LAST' .profile .bashrc .login .bash_login .bash_profile .profile:LAST=profile .profile:export LAST .bashrc:LAST=bashrc .bashrc:export LAST .login:LAST=login .login:export LAST .bash_login:LAST=bash_login .bash_login:export LAST .bash_profile:LAST=bash_profile .bash_profile:export LAST harald@node3:~$ exit logout Connection to node3 closed. harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` Using a playbook reveals the same: the .profile or .bash_profile since /bin/bash is my shell, is not used. ``` - hosts: node3 gather_facts: true tasks: - name: Testing to run node shell: echo ""SHELL=$SHELL LAST=$LAST"" #environment: # PATH: ""/home/harald/node:{{ ansible_env.PATH }}"" args: executable: /bin/bash ``` ##### EXPECTED RESULTS The profile files should be used. Manual setting PATH environment variable in playbooks works, but this is highly inconvenient if this is needed for every shell statement. ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST=bash_profile ``` ##### ACTUAL RESULTS ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` or via ansible-playbook: ``` harald@giga:~/vagrantstuff/node/ansible$ ansible-playbook -v play.yml Using /home/harald/vagrantstuff/node/ansible/ansible.cfg as config file PLAY [node3] ******************************************************************* TASK [setup] ******************************************************************* ok: [node3] TASK [Testing to run node] ***************************************************** changed: [node3] => {""changed"": true, ""cmd"": ""echo \""SHELL=$SHELL LAST=$LAST\"""", ""delta"": ""0:00:00.008589"", ""end"": ""2016-09-22 10:20:48.320586"", ""rc"": 0, ""start"": ""2016-09-22 10:20:48.311997"", ""stderr"": """", ""stdout"": ""SHELL=/bin/bash LAST="", ""stdout_lines"": [""SHELL=/bin/bash LAST=""], ""warnings"": []} PLAY RECAP ********************************************************************* node3 : ok=2 changed=1 unreachable=0 failed=0 ``` LAST is supposed to show bash_profile just like the interactive login did. 
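This matches how the shell module executes commands: Ansible runs them in a non-interactive, non-login shell, so neither .bash_profile nor .bashrc is read on the target. A minimal workaround sketch (reusing the LAST variable from this report, and not a change to the module itself) is to source the profile explicitly inside the task:

```
- name: Run command with the login environment loaded
  shell: source ~/.bash_profile && echo SHELL=$SHELL LAST=$LAST
  args:
    executable: /bin/bash
```

An equivalent alternative is to wrap the command in a login shell, e.g. invoking bash with -l, which makes bash read the profile files before running the command.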
",1,shell not reading login bash login or profile bash profile or bashrc issue type bug report component name shell ansible version ansible config file home harald vagrantstuff node ansible ansible cfg configured module search path default w o overrides configuration ansible cfg only added this line inventory hosts otherwise it s untouched os environment linux ubuntu on host debian on node to be configured summary commands via shell do not read the profile bashrc login bash profile of the user of who runs ansible steps to reproduce have set up ssh passwordless authentication between ansible host and worker node user has bin bash as default shell on node on node have export last bashrc resp export last bash profile in bashrc resp bash profile last would contain the last profile file used ssh with subsequent echo last will show bash profile ansible m shell a echo last will show nothing harald giga vagrantstuff node ansible ssh debian gnu linux jessie built last login thu sep from giga lan harald echo shell shell last last shell bin bash last bash profile harald grep harald etc passwd harald x harald kubota home harald bin bash harald egrep last export last profile bashrc login bash login bash profile profile last profile profile export last bashrc last bashrc bashrc export last login last login login export last bash login last bash login bash login export last bash profile last bash profile bash profile export last harald exit logout connection to closed harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last using a playbook reveals the same the profile or bash profile since bin bash is my shell is not used hosts gather facts true tasks name testing to run node shell echo shell shell last last environment path home harald node ansible env path args executable bin bash expected results the profile files should be used manual setting path environment variable in playbooks works but this is highly inconvenient if this is needed for every shell statement harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last bash profile actual results harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last or via ansible playbook harald giga vagrantstuff node ansible ansible playbook v play yml using home harald vagrantstuff node ansible ansible cfg as config file play task ok task changed changed true cmd echo shell shell last last delta end rc start stderr stdout shell bin bash last stdout lines warnings play recap ok changed unreachable failed last is supposed to show bash profile just like the interactive login did ,1 1209,5165219952.0,IssuesEvent,2017-01-17 13:04:56,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module does not create root volume as documented,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT ubuntu 14.04 ##### SUMMARY Launching an EBS backed ec2 instance with a non-default root volume does not work as shown in the documentation example. Instead it launches an instance with a standard 8GB root volume, and an additional volume per the definition. 
The following is the task which was used to launch the instance (some keys removed for brevity): ``` - name: Launch ec2 instance ec2: instance_type: ""t2.small"" volumes: - device_name: /dev/xvda volume_type: gp2 volume_size: 15 register: ec2 ``` This is the example from http://docs.ansible.com/ansible/ec2_module.html, which shows a non-standard root volume: ``` # Single instance with ssd gp2 root volume - ec2: key_name: mykey group: webserver instance_type: c3.medium image: ami-123456 wait: yes wait_timeout: 500 volumes: - device_name: /dev/xvda volume_type: gp2 volume_size: 8 vpc_subnet_id: subnet-29e63245 assign_public_ip: yes exact_count: 1 ``` ",True,"ec2 module does not create root volume as documented - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT ubuntu 14.04 ##### SUMMARY Launching an EBS backed ec2 instance with a non-default root volume does not work as shown in the documentation example. Instead it launches an instance with a standard 8GB root volume, and an additional volume per the definition. The following is the task which was used to launch the instance (some keys removed for brevity): ``` - name: Launch ec2 instance ec2: instance_type: ""t2.small"" volumes: - device_name: /dev/xvda volume_type: gp2 volume_size: 15 register: ec2 ``` This is the example from http://docs.ansible.com/ansible/ec2_module.html, which shows a non-standard root volume: ``` # Single instance with ssd gp2 root volume - ec2: key_name: mykey group: webserver instance_type: c3.medium image: ami-123456 wait: yes wait_timeout: 500 volumes: - device_name: /dev/xvda volume_type: gp2 volume_size: 8 vpc_subnet_id: subnet-29e63245 assign_public_ip: yes exact_count: 1 ``` ",1, module does not create root volume as documented issue type bug report component name module ansible version ansible os environment ubuntu summary launching an ebs backed instance with a non default root volume does not work as shown in the documentation example instead it launches an instance with a standard root volume and an additional volume per the definition the following is the task which was used to launch the instance some keys removed for brevity name launch instance instance type small volumes device name dev xvda volume type volume size register this is the example from which shows a non standard root volume single instance with ssd root volume key name mykey group webserver instance type medium image ami wait yes wait timeout volumes device name dev xvda volume type volume size vpc subnet id subnet assign public ip yes exact count ,1 1785,6575859837.0,IssuesEvent,2017-09-11 17:36:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mysql_user is not idempotent with complex priv,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT - Ansible host: Xubuntu 16.04 - Ansible target: Debian Jessie + MariaDB 10.1 (with galera module activated) ##### SUMMARY With ""complex"" privs, _mysql_user_ module is not idempotent. The module should not have a state changed. 
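A quick way to demonstrate the problem, and to verify any future fix, is to apply the same grant a second time and assert that the run reports no change. This is only an illustrative check built from the 'monitor' user described below (see the reproduction steps that follow), not a fix for the module:

```
- name: MYSQL_USERS | Re-apply grants for monitor
  mysql_user:
    name: monitor
    password: 1a2b3c
    host: '%'
    priv: '*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT'
  register: monitor_grants

- name: Fail if the second run still reports a change
  assert:
    that:
      - not monitor_grants.changed
```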
##### STEPS TO REPRODUCE ``` - hosts: tag_Db_1 vars: c_mysql_users: - name: ""monitor"" password: ""1a2b3c"" priv: '*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT' host: ""%"" - name: ""monitor2"" password: ""1a2b3c"" priv: '*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT' host: ""%"" tasks: - block: - name: MYSQL_USERS | Create users mysql_user: > name=""{{ item.name }}"" password=""{{ item.password }}"" host=""{{ item.host }}"" priv='{{ item.priv }}' with_items: ""{{ c_mysql_users }}"" run_once: true ``` ##### EXPECTED RESULTS On second launch, mysql_user module should not change. ##### ACTUAL RESULTS User ""monitor2"" is not changed but user ""monitor"" have changed state every time ``` PLAYBOOK: bug.yml ************************************************************** 1 plays in bug.yml PLAY [tag_Db_1] **************************************************************** TASK [setup] ******************************************************************* ok: [db2] ok: [db1] TASK [MYSQL_USERS | Create users] ********************************************** task path: /home/hanx/dev/na/ansible/affil/bug.yml:15 changed: [db1] => (item={u'host': u'%', u'password': u'1a2b3c', u'name': u'monitor', u'priv': u'*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT'}) => {""changed"": true, ""item"": {""host"": ""%"", ""name"": ""monitor"", ""password"": ""1a2b3c"", ""priv"": ""*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT""}, ""user"": ""monitor""} ok: [db1] => (item={u'host': u'%', u'password': u'1a2b3c', u'name': u'monitor2', u'priv': u'*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT'}) => {""changed"": false, ""item"": {""host"": ""%"", ""name"": ""monitor2"", ""password"": ""1a2b3c"", ""priv"": ""*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT""}, ""user"": ""monitor2""} PLAY RECAP ********************************************************************* db1 : ok=2 changed=1 unreachable=0 failed=0 db2 : ok=1 changed=0 unreachable=0 failed=0 ``` There is grant output on my server: ``` MariaDB [(none)]> show grants for 'monitor'; +-------------------------------------------------------------------------------------------------------------------------------------+ | Grants for monitor@% | +-------------------------------------------------------------------------------------------------------------------------------------+ | GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'monitor'@'%' IDENTIFIED BY PASSWORD '*09D6E9E852C3A9F6BDC1C52834B361317225F914' | | GRANT SELECT ON `mysql`.`db` TO 'monitor'@'%' | | GRANT SELECT ON `mysql`.`user` TO 'monitor'@'%' | | GRANT SELECT ON `mysql`.`tables_priv` TO 'monitor'@'%' | +-------------------------------------------------------------------------------------------------------------------------------------+ 4 rows in set (0.00 sec) MariaDB [(none)]> show grants for 'monitor2'; +--------------------------------------------------------------------------------------------------------------------------------------+ | Grants for monitor2@% | +--------------------------------------------------------------------------------------------------------------------------------------+ | GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'monitor2'@'%' IDENTIFIED BY PASSWORD '*09D6E9E852C3A9F6BDC1C52834B361317225F914' | | GRANT SELECT ON `mysql`.* TO 'monitor2'@'%' | 
+--------------------------------------------------------------------------------------------------------------------------------------+ 2 rows in set (0.00 sec) ``` ",True,"mysql_user is not idempotent with complex priv - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT - Ansible host: Xubuntu 16.04 - Ansible target: Debian Jessie + MariaDB 10.1 (with galera module activated) ##### SUMMARY With ""complex"" privs, _mysql_user_ module is not idempotent. The module should not have a state changed. ##### STEPS TO REPRODUCE ``` - hosts: tag_Db_1 vars: c_mysql_users: - name: ""monitor"" password: ""1a2b3c"" priv: '*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT' host: ""%"" - name: ""monitor2"" password: ""1a2b3c"" priv: '*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT' host: ""%"" tasks: - block: - name: MYSQL_USERS | Create users mysql_user: > name=""{{ item.name }}"" password=""{{ item.password }}"" host=""{{ item.host }}"" priv='{{ item.priv }}' with_items: ""{{ c_mysql_users }}"" run_once: true ``` ##### EXPECTED RESULTS On second launch, mysql_user module should not change. ##### ACTUAL RESULTS User ""monitor2"" is not changed but user ""monitor"" have changed state every time ``` PLAYBOOK: bug.yml ************************************************************** 1 plays in bug.yml PLAY [tag_Db_1] **************************************************************** TASK [setup] ******************************************************************* ok: [db2] ok: [db1] TASK [MYSQL_USERS | Create users] ********************************************** task path: /home/hanx/dev/na/ansible/affil/bug.yml:15 changed: [db1] => (item={u'host': u'%', u'password': u'1a2b3c', u'name': u'monitor', u'priv': u'*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT'}) => {""changed"": true, ""item"": {""host"": ""%"", ""name"": ""monitor"", ""password"": ""1a2b3c"", ""priv"": ""*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.user:SELECT/mysql.db:SELECT/mysql.tables_priv:SELECT""}, ""user"": ""monitor""} ok: [db1] => (item={u'host': u'%', u'password': u'1a2b3c', u'name': u'monitor2', u'priv': u'*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT'}) => {""changed"": false, ""item"": {""host"": ""%"", ""name"": ""monitor2"", ""password"": ""1a2b3c"", ""priv"": ""*.*:SHOW DATABASES,REPLICATION CLIENT/mysql.*:SELECT""}, ""user"": ""monitor2""} PLAY RECAP ********************************************************************* db1 : ok=2 changed=1 unreachable=0 failed=0 db2 : ok=1 changed=0 unreachable=0 failed=0 ``` There is grant output on my server: ``` MariaDB [(none)]> show grants for 'monitor'; +-------------------------------------------------------------------------------------------------------------------------------------+ | Grants for monitor@% | +-------------------------------------------------------------------------------------------------------------------------------------+ | GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'monitor'@'%' IDENTIFIED BY PASSWORD '*09D6E9E852C3A9F6BDC1C52834B361317225F914' | | GRANT SELECT ON `mysql`.`db` TO 'monitor'@'%' | | GRANT SELECT ON `mysql`.`user` TO 'monitor'@'%' | | GRANT SELECT ON `mysql`.`tables_priv` TO 'monitor'@'%' | 
+-------------------------------------------------------------------------------------------------------------------------------------+ 4 rows in set (0.00 sec) MariaDB [(none)]> show grants for 'monitor2'; +--------------------------------------------------------------------------------------------------------------------------------------+ | Grants for monitor2@% | +--------------------------------------------------------------------------------------------------------------------------------------+ | GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'monitor2'@'%' IDENTIFIED BY PASSWORD '*09D6E9E852C3A9F6BDC1C52834B361317225F914' | | GRANT SELECT ON `mysql`.* TO 'monitor2'@'%' | +--------------------------------------------------------------------------------------------------------------------------------------+ 2 rows in set (0.00 sec) ``` ",1,mysql user is not idempotent with complex priv issue type bug report component name mysql user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ansible host xubuntu ansible target debian jessie mariadb with galera module activated summary with complex privs mysql user module is not idempotent the module should not have a state changed steps to reproduce hosts tag db vars c mysql users name monitor password priv show databases replication client mysql user select mysql db select mysql tables priv select host name password priv show databases replication client mysql select host tasks block name mysql users create users mysql user name item name password item password host item host priv item priv with items c mysql users run once true expected results on second launch mysql user module should not change actual results user is not changed but user monitor have changed state every time playbook bug yml plays in bug yml play task ok ok task task path home hanx dev na ansible affil bug yml changed item u host u u password u u name u monitor u priv u show databases replication client mysql user select mysql db select mysql tables priv select changed true item host name monitor password priv show databases replication client mysql user select mysql db select mysql tables priv select user monitor ok item u host u u password u u name u u priv u show databases replication client mysql select changed false item host name password priv show databases replication client mysql select user play recap ok changed unreachable failed ok changed unreachable failed there is grant output on my server mariadb show grants for monitor grants for monitor grant show databases replication client on to monitor identified by password grant select on mysql db to monitor grant select on mysql user to monitor grant select on mysql tables priv to monitor rows in set sec mariadb show grants for grants for grant show databases replication client on to identified by password grant select on mysql to rows in set sec ,1 1188,5103431043.0,IssuesEvent,2017-01-04 21:23:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container module doesn't match existing container when entrypoint is used,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_container` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Linux/Fedora 22 ##### SUMMARY When 
running a task with an `entrypoint` parameter, a docker container is destroyed and recreated each time after the first time it is run. ##### STEPS TO REPRODUCE Fill in the variables and run this twice: ``` - name: Create data container docker_container: name: ""{{ docker_data_name }}"" image: ""{{ image }}"" state: present entrypoint: ""/bin/echo Data-only container for {{ name }}"" ``` On the second run this roduces: ``` TASK [docker-image : Create data container] ************************************ changed: [hostname] ``` Destroy that container, comment out the entrypoint, then run this twice ``` - name: Create data container docker_container: name: ""{{ docker_data_name }}"" image: ""{{ image }}"" state: present #entrypoint: ""/bin/echo Data-only container for {{ name }}"" ``` On the second run this poduces: ``` TASK [docker-image : Create data container] ************************************ ok: [hostname] ``` ##### EXPECTED RESULTS Ansible reports no change. The created timestamp should be the same as the first time it was run. ##### ACTUAL RESULTS Ansible reports a change, and the created timestamp becomes recent. ",True,"docker_container module doesn't match existing container when entrypoint is used - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_container` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Linux/Fedora 22 ##### SUMMARY When running a task with an `entrypoint` parameter, a docker container is destroyed and recreated each time after the first time it is run. ##### STEPS TO REPRODUCE Fill in the variables and run this twice: ``` - name: Create data container docker_container: name: ""{{ docker_data_name }}"" image: ""{{ image }}"" state: present entrypoint: ""/bin/echo Data-only container for {{ name }}"" ``` On the second run this roduces: ``` TASK [docker-image : Create data container] ************************************ changed: [hostname] ``` Destroy that container, comment out the entrypoint, then run this twice ``` - name: Create data container docker_container: name: ""{{ docker_data_name }}"" image: ""{{ image }}"" state: present #entrypoint: ""/bin/echo Data-only container for {{ name }}"" ``` On the second run this poduces: ``` TASK [docker-image : Create data container] ************************************ ok: [hostname] ``` ##### EXPECTED RESULTS Ansible reports no change. The created timestamp should be the same as the first time it was run. ##### ACTUAL RESULTS Ansible reports a change, and the created timestamp becomes recent. 
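The recreation suggests the module is comparing the configured entrypoint string against the list form that docker inspect reports for the existing container. One experiment worth trying, purely a guessed workaround using the same hypothetical variables as the task above, is to pass the entrypoint as a list rather than a single string:

```
- name: Create data container
  docker_container:
    name: '{{ docker_data_name }}'
    image: '{{ image }}'
    state: present
    entrypoint:
      - /bin/echo
      - Data-only container for {{ name }}
```

Whether this avoids the false change depends on how the installed docker_container version normalises the comparison, so it is worth confirming against the Entrypoint value shown by docker inspect either way.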
",1,docker container module doesn t match existing container when entrypoint is used issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux fedora summary when running a task with an entrypoint parameter a docker container is destroyed and recreated each time after the first time it is run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used fill in the variables and run this twice name create data container docker container name docker data name image image state present entrypoint bin echo data only container for name on the second run this roduces task changed destroy that container comment out the entrypoint then run this twice name create data container docker container name docker data name image image state present entrypoint bin echo data only container for name on the second run this poduces task ok expected results ansible reports no change the created timestamp should be the same as the first time it was run actual results ansible reports a change and the created timestamp becomes recent ,1 1005,4774742004.0,IssuesEvent,2016-10-27 08:01:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,win_user module check mode always returns skipping (never changed or OK),affects_2.2 feature_idea waiting_on_maintainer windows," ##### ISSUE TYPE - Feature idea ##### COMPONENT NAME win_user module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200) lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200) lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides Python 2.7.5 (default, Oct 11 2015, 17:47:16) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2 ``` ##### CONFIGURATION ansible_user: xxxx ansible_pass: xxxxx ansible_connection: winrm ansible_ssh_port: 5986 ##### OS / ENVIRONMENT Ansible running in RHEL 7.2 OS remote is Windows 2012R2 with powershell 4.0 ##### SUMMARY When i execute the playbook in check mode on remote node (windows) the win_user module always return skipping, never return state: Changed or OK. ##### STEPS TO REPRODUCE ``` ANSIBLE COMMAND: ansible-playbook -i ./inv/invs ./site.yml --check ``` ``` ANSIBLE CODE: $cat site.yml - name: User Account Disabled hosts: ""{{hosts|default('all')}}"" vars_files: - group_vars/User_Account_Disabled tasks: - name: ""User Account Disabled"" win_user: name: ""User"" account_disabled: ""yes"" ``` ##### EXPECTED RESULTS When i execute the playbook in check mode on remote node (windows) with the Win account disabled, the win_user module always return skipping, never return state: Changed or OK. 
##### ACTUAL RESULTS ``` PLAY [User Account Disabled] *************************************************** TASK [setup] ******************************************************************* skipping: [server.com] TASK [User_Account_Disabled : UserAccount=guest Disabled=yes] ****************** skipping: [server.com] PLAY RECAP ********************************************************************* server.com : ok=1 changed=0 unreachable=0 failed=0 ``` ",True,"win_user module check mode always returns skipping (never changed or OK) - ##### ISSUE TYPE - Feature idea ##### COMPONENT NAME win_user module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200) lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200) lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides Python 2.7.5 (default, Oct 11 2015, 17:47:16) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2 ``` ##### CONFIGURATION ansible_user: xxxx ansible_pass: xxxxx ansible_connection: winrm ansible_ssh_port: 5986 ##### OS / ENVIRONMENT Ansible running in RHEL 7.2 OS remote is Windows 2012R2 with powershell 4.0 ##### SUMMARY When i execute the playbook in check mode on remote node (windows) the win_user module always return skipping, never return state: Changed or OK. ##### STEPS TO REPRODUCE ``` ANSIBLE COMMAND: ansible-playbook -i ./inv/invs ./site.yml --check ``` ``` ANSIBLE CODE: $cat site.yml - name: User Account Disabled hosts: ""{{hosts|default('all')}}"" vars_files: - group_vars/User_Account_Disabled tasks: - name: ""User Account Disabled"" win_user: name: ""User"" account_disabled: ""yes"" ``` ##### EXPECTED RESULTS When i execute the playbook in check mode on remote node (windows) with the Win account disabled, the win_user module always return skipping, never return state: Changed or OK. 
##### ACTUAL RESULTS ``` PLAY [User Account Disabled] *************************************************** TASK [setup] ******************************************************************* skipping: [server.com] TASK [User_Account_Disabled : UserAccount=guest Disabled=yes] ****************** skipping: [server.com] PLAY RECAP ********************************************************************* server.com : ok=1 changed=0 unreachable=0 failed=0 ``` ",1,win user module check mode always returns skipping never changed or ok issue type feature idea component name win user module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides python default oct on configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible user xxxx ansible pass xxxxx ansible connection winrm ansible ssh port os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible running in rhel os remote is windows with powershell summary when i execute the playbook in check mode on remote node windows the win user module always return skipping never return state changed or ok steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible command ansible playbook i inv invs site yml check ansible code cat site yml name user account disabled hosts hosts default all vars files group vars user account disabled tasks name user account disabled win user name user account disabled yes expected results when i execute the playbook in check mode on remote node windows with the win account disabled the win user module always return skipping never return state changed or ok actual results play task skipping task skipping play recap server com ok changed unreachable failed ,1 1029,4822882376.0,IssuesEvent,2016-11-06 03:00:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum module doesn't validate that RPM could be downloaded before reading,affects_2.3 bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME `yum` ##### ANSIBLE VERSION ``` ansible 2.3.0 (type-filter a6feeee50f) last updated 2016/10/28 15:41:12 (GMT -500) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The `yum` module, in `fetch_rpm_from_url` does not attempt to validate that the request made and returned by `fetch_url` was successful, and instead immediately tried to `read` from that response causing: ``` 'NoneType' object has no attribute 'read' ``` ##### STEPS TO REPRODUCE Try installing an RPM via `yum` that causes `fetch_url` to fail downloading. ##### EXPECTED RESULTS The error message returned from `fetch_url` as part of `info` should be displayed instead of trying to read from None. ##### ACTUAL RESULTS ``` fatal: [haproxy.dev]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""https://centos7.iuscommunity.org/ius-release.rpm""], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": ""Failure downloading https://centos7.iuscommunity.org/ius-release.rpm, 'NoneType' object has no attribute 'read'""} ```",True,"yum module doesn't validate that RPM could be downloaded before reading - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME `yum` ##### ANSIBLE VERSION ``` ansible 2.3.0 (type-filter a6feeee50f) last updated 2016/10/28 15:41:12 (GMT -500) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The `yum` module, in `fetch_rpm_from_url` does not attempt to validate that the request made and returned by `fetch_url` was successful, and instead immediately tried to `read` from that response causing: ``` 'NoneType' object has no attribute 'read' ``` ##### STEPS TO REPRODUCE Try installing an RPM via `yum` that causes `fetch_url` to fail downloading. ##### EXPECTED RESULTS The error message returned from `fetch_url` as part of `info` should be displayed instead of trying to read from None. ##### ACTUAL RESULTS ``` fatal: [haproxy.dev]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""https://centos7.iuscommunity.org/ius-release.rpm""], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": ""Failure downloading https://centos7.iuscommunity.org/ius-release.rpm, 'NoneType' object has no attribute 'read'""} ```",1,yum module doesn t validate that rpm could be downloaded before reading issue type bug report component name yum ansible version ansible type filter last updated gmt configuration n a os environment n a summary the yum module in fetch rpm from url does not attempt to validate that the request made and returned by fetch url was successful and instead immediately tried to read from that response causing nonetype object has no attribute read steps to reproduce try installing an rpm via yum that causes fetch url to fail downloading expected results the error message returned from fetch url as part of info should be displayed instead of trying to read from none actual results fatal failed changed false failed true invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install repoquery true list null name state present update cache false validate certs true module name yum msg failure downloading nonetype object has no attribute read ,1 1440,6256852176.0,IssuesEvent,2017-07-14 11:29:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Syntax inconsistency in File module,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME File module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing. 
##### OS / ENVIRONMENT ``` Linux redbox 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64 GNU/Linux (Arch Linux) ``` ##### SUMMARY Using `mode=` and `mode:` syntax should have the same result. But `mode: 2775` gives a really weird outcome compared `mode=2775`'s result, which is correct. ##### STEPS TO REPRODUCE ``` - name: Create sites directory file: path: /opt/sites state: directory group: coopdevs owner: ubuntu mode: 2775 ``` ##### EXPECTED RESULTS The following is the expected result, obtained using `mode=` syntax: ``` $ ls -l /opt/ total 4 drwxrwsr-x 3 ubuntu coopdevs 4096 Apr 8 14:58 sites ``` ##### ACTUAL RESULTS This is the result using `mode:` syntax: ``` $ ls -l /opt/ total 4 d-ws-w-rwt 3 ubuntu coopdevs 4096 Apr 8 14:58 sites ``` ##### HINTS 1. It seems related to the first digit `2`, in the documentation it only talks about `0`. And actually `0775` works fine. 2. It seems that wrapping the digits with quotes it actually works, like `mode: '2775'`. ",True,"Syntax inconsistency in File module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME File module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing. ##### OS / ENVIRONMENT ``` Linux redbox 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64 GNU/Linux (Arch Linux) ``` ##### SUMMARY Using `mode=` and `mode:` syntax should have the same result. But `mode: 2775` gives a really weird outcome compared `mode=2775`'s result, which is correct. ##### STEPS TO REPRODUCE ``` - name: Create sites directory file: path: /opt/sites state: directory group: coopdevs owner: ubuntu mode: 2775 ``` ##### EXPECTED RESULTS The following is the expected result, obtained using `mode=` syntax: ``` $ ls -l /opt/ total 4 drwxrwsr-x 3 ubuntu coopdevs 4096 Apr 8 14:58 sites ``` ##### ACTUAL RESULTS This is the result using `mode:` syntax: ``` $ ls -l /opt/ total 4 d-ws-w-rwt 3 ubuntu coopdevs 4096 Apr 8 14:58 sites ``` ##### HINTS 1. It seems related to the first digit `2`, in the documentation it only talks about `0`. And actually `0775` works fine. 2. It seems that wrapping the digits with quotes it actually works, like `mode: '2775'`. 
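Editor's note on the File-module report above: the behaviour comes down to YAML type coercion. An unquoted `mode: 2775` reaches the module as the decimal integer 2775, which as a raw permission value is octal 5327 and renders exactly as the `d-ws-w-rwt` listing in the report, while the quoted `'2775'` can be parsed as octal. A small stand-alone sketch (plain Python, no Ansible required) that reproduces both listings:

```
import stat

# Unquoted in YAML the value arrives as the decimal integer 2775 ...
unquoted = 2775
print(oct(unquoted), stat.filemode(stat.S_IFDIR | unquoted))
# -> 0o5327 d-ws-w-rwt   (the "weird" directory listing from the report)

# ... whereas the quoted string '2775' can be interpreted as octal, as intended.
quoted = int('2775', 8)
print(oct(quoted), stat.filemode(stat.S_IFDIR | quoted))
# -> 0o2775 drwxrwsr-x   (the expected listing)
```

This is also why `0775` happens to work in the same report: YAML 1.1 loaders read an integer with a leading zero as an octal literal, so that value already carries the intended permission bits.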
",1,syntax inconsistency in file module issue type bug report component name file module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux redbox arch smp preempt thu mar cet gnu linux arch linux summary using mode and mode syntax should have the same result but mode gives a really weird outcome compared mode s result which is correct steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create sites directory file path opt sites state directory group coopdevs owner ubuntu mode expected results the following is the expected result obtained using mode syntax ls l opt total drwxrwsr x ubuntu coopdevs apr sites actual results this is the result using mode syntax ls l opt total d ws w rwt ubuntu coopdevs apr sites hints it seems related to the first digit in the documentation it only talks about and actually works fine it seems that wrapping the digits with quotes it actually works like mode ,1 1705,6574415970.0,IssuesEvent,2017-09-11 12:49:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service Unable to load docker-compose.,affects_2.2 bug_report cloud docker waiting_on_maintainer,"The symptoms are the same as 3906, which is closed as resolved. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides $ docker-compose --version docker-compose version 1.8.0, build f3628c7 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OS-X 10.11.16 (executing playbook here) host, running on virtualbox: Distributor ID: Ubuntu Description: Ubuntu 16.04.1 LTS Release: 16.04 Codename: xenial ##### SUMMARY ""Unable to load docker-compose. Try `pip install docker-compose`. Error: cannot import name build_action_from_opts"" The results seems to be the same as 3906, which I believe is included in the 2.2.0 release. 
##### STEPS TO REPRODUCE added docker_service section to playbook, ran playbook on OS-X, host is Ubuntu 16.04 ``` - name: copy compose file to remote server copy: force: yes src: ""{{ local_path }}"" dest: ""{{ remote_path }}{{ remote_file }}"" - name: docker-compose via ansible docker_service docker_service: files: - ""{{ remote_file }}"" project_src: ""{{ remote_path }}"" project_name: ""bbs-services"" pull: true state: present restarted: true ``` ##### EXPECTED RESULTS PLAY [local-services] ********************************************************** TASK [setup] ******************************************************************* ok: [192.168.67.25] TASK [copy compose file to remote server] ************************************** ok: [192.168.67.25] TASK [docker-compose via ansible docker_service] ********************* ok: [192.168.67.25] PLAY RECAP ********************************************************************* 192.168.67.25 : ok=3 changed=0 unreachable=0 failed=0 ##### ACTUAL RESULTS partial result (failed task only) ``` TASK [docker-compose via ansible docker_service] ********************* task path: /Users/eric.anderson/projects/stash.innitrode.com/SFB/bbs-services/ansible/servers.yaml:52 Using module file /Users/eric.anderson/projects/virtualenvs/ansible/lib/python2.7/site-packages/ansible/modules/core/cloud/docker/docker_service.py <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.67.25 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346 `"" && echo ansible-tmp-1478553260.5-110070586491346=""` echo $HOME/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346 `"" ) && sleep 0'""'""'' <192.168.67.25> PUT /var/folders/nb/6r1j2wjn5qxb_bbqj31nmx7rncv064/T/tmpXfl3DW TO /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py <192.168.67.25> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.67.25]' <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.67.25 '/bin/sh -c '""'""'chmod u+x /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/ /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py && sleep 0'""'""'' <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.67.25 '/bin/sh -c 
'""'""'/usr/bin/python /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py && sleep 0'""'""'' fatal: [192.168.67.25]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": null, ""dependencies"": true, ""docker_host"": null, ""files"": [ ""bbs-services-local.yml"" ], ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bbs-services"", ""project_src"": ""/home/billing/"", ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": true, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 10, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Unable to load docker-compose. Try `pip install docker-compose`. Error: cannot import name build_action_from_opts"" } to retry, use: --limit @/Users/eric.anderson/projects/stash.innitrode.com/SFB/bbs-services/ansible/servers.retry PLAY RECAP ********************************************************************* 192.168.67.25 : ok=2 changed=0 unreachable=0 failed=1 ``` ",True,"docker_service Unable to load docker-compose. - The symptoms are the same as 3906, which is closed as resolved. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides $ docker-compose --version docker-compose version 1.8.0, build f3628c7 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OS-X 10.11.16 (executing playbook here) host, running on virtualbox: Distributor ID: Ubuntu Description: Ubuntu 16.04.1 LTS Release: 16.04 Codename: xenial ##### SUMMARY ""Unable to load docker-compose. Try `pip install docker-compose`. Error: cannot import name build_action_from_opts"" The results seems to be the same as 3906, which I believe is included in the 2.2.0 release. 
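Editor's note on the docker_service report above: `cannot import name build_action_from_opts` is a version-skew ImportError between the module and the installed docker-compose package rather than a missing dependency. Below is a hedged sketch of the usual guard pattern for this situation; the minimum-version constant is a placeholder and this is not the module's real code.

```
# Sketch: import docker-compose defensively and report the installed version,
# so a version-skew ImportError produces an actionable message.
from distutils.version import LooseVersion

try:
    import compose
    HAS_COMPOSE = True
except ImportError:
    compose = None
    HAS_COMPOSE = False

MINIMUM_COMPOSE = "1.7.0"  # placeholder threshold, for illustration only


def check_compose(fail_json):
    """Call before any work; fail_json stands in for AnsibleModule.fail_json."""
    if not HAS_COMPOSE:
        fail_json(msg="Unable to load docker-compose. Try `pip install docker-compose`.")
    if LooseVersion(compose.__version__) < LooseVersion(MINIMUM_COMPOSE):
        fail_json(msg="docker-compose %s is installed, but at least %s is required."
                      % (compose.__version__, MINIMUM_COMPOSE))
```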
##### STEPS TO REPRODUCE added docker_service section to playbook, ran playbook on OS-X, host is Ubuntu 16.04 ``` - name: copy compose file to remote server copy: force: yes src: ""{{ local_path }}"" dest: ""{{ remote_path }}{{ remote_file }}"" - name: docker-compose via ansible docker_service docker_service: files: - ""{{ remote_file }}"" project_src: ""{{ remote_path }}"" project_name: ""bbs-services"" pull: true state: present restarted: true ``` ##### EXPECTED RESULTS PLAY [local-services] ********************************************************** TASK [setup] ******************************************************************* ok: [192.168.67.25] TASK [copy compose file to remote server] ************************************** ok: [192.168.67.25] TASK [docker-compose via ansible docker_service] ********************* ok: [192.168.67.25] PLAY RECAP ********************************************************************* 192.168.67.25 : ok=3 changed=0 unreachable=0 failed=0 ##### ACTUAL RESULTS partial result (failed task only) ``` TASK [docker-compose via ansible docker_service] ********************* task path: /Users/eric.anderson/projects/stash.innitrode.com/SFB/bbs-services/ansible/servers.yaml:52 Using module file /Users/eric.anderson/projects/virtualenvs/ansible/lib/python2.7/site-packages/ansible/modules/core/cloud/docker/docker_service.py <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.67.25 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346 `"" && echo ansible-tmp-1478553260.5-110070586491346=""` echo $HOME/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346 `"" ) && sleep 0'""'""'' <192.168.67.25> PUT /var/folders/nb/6r1j2wjn5qxb_bbqj31nmx7rncv064/T/tmpXfl3DW TO /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py <192.168.67.25> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.67.25]' <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.67.25 '/bin/sh -c '""'""'chmod u+x /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/ /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py && sleep 0'""'""'' <192.168.67.25> ESTABLISH SSH CONNECTION FOR USER: billing <192.168.67.25> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=billing -o ConnectTimeout=10 -o ControlPath=/Users/eric.anderson/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.67.25 '/bin/sh -c 
'""'""'/usr/bin/python /home/billing/.ansible/tmp/ansible-tmp-1478553260.5-110070586491346/docker_service.py && sleep 0'""'""'' fatal: [192.168.67.25]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": null, ""dependencies"": true, ""docker_host"": null, ""files"": [ ""bbs-services-local.yml"" ], ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bbs-services"", ""project_src"": ""/home/billing/"", ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": true, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 10, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Unable to load docker-compose. Try `pip install docker-compose`. Error: cannot import name build_action_from_opts"" } to retry, use: --limit @/Users/eric.anderson/projects/stash.innitrode.com/SFB/bbs-services/ansible/servers.retry PLAY RECAP ********************************************************************* 192.168.67.25 : ok=2 changed=0 unreachable=0 failed=1 ``` ",1,docker service unable to load docker compose the symptoms are the same as which is closed as resolved issue type bug report component name docker service ansible version ansible version ansible config file configured module search path default w o overrides docker compose version docker compose version build configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment os x executing playbook here host running on virtualbox distributor id ubuntu description ubuntu lts release codename xenial summary unable to load docker compose try pip install docker compose error cannot import name build action from opts the results seems to be the same as which i believe is included in the release steps to reproduce added docker service section to playbook ran playbook on os x host is ubuntu name copy compose file to remote server copy force yes src local path dest remote path remote file name docker compose via ansible docker service docker service files remote file project src remote path project name bbs services pull true state present restarted true expected results play task ok task ok task ok play recap ok changed unreachable failed actual results partial result failed task only task task path users eric anderson projects stash innitrode com sfb bbs services ansible servers yaml using module file users eric anderson projects virtualenvs ansible lib site packages ansible modules core cloud docker docker service py establish ssh connection for user billing ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user billing o connecttimeout o controlpath users eric anderson ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders nb t to home billing ansible tmp ansible tmp docker service py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased 
publickey o passwordauthentication no o user billing o connecttimeout o controlpath users eric anderson ansible cp ansible ssh h p r establish ssh connection for user billing ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user billing o connecttimeout o controlpath users eric anderson ansible cp ansible ssh h p r bin sh c chmod u x home billing ansible tmp ansible tmp home billing ansible tmp ansible tmp docker service py sleep establish ssh connection for user billing ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user billing o connecttimeout o controlpath users eric anderson ansible cp ansible ssh h p r tt bin sh c usr bin python home billing ansible tmp ansible tmp docker service py sleep fatal failed changed false failed true invocation module args api version null build false cacert path null cert path null debug false definition null dependencies true docker host null files bbs services local yml filter logger false hostname check false key path null nocache false project name bbs services project src home billing pull true recreate smart remove images null remove orphans false remove volumes false restarted true scale null services null ssl version null state present stopped false timeout tls null tls hostname null tls verify null module name docker service msg unable to load docker compose try pip install docker compose error cannot import name build action from opts to retry use limit users eric anderson projects stash innitrode com sfb bbs services ansible servers retry play recap ok changed unreachable failed ,1 917,4621818704.0,IssuesEvent,2016-09-27 03:49:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container.py fails on second run if devices are configured,affects_2.2 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing special ##### OS / ENVIRONMENT ubuntu ##### SUMMARY When you have devices configured in your playbook, e.g. devices: - /dev/net/tun - /dev/kvm Then a second run through of a playbook will fail. Details below. ##### STEPS TO REPRODUCE Create a container with devices configured. Then rerun the same playbook. ##### EXPECTED RESULTS Nothing should change, let alone fail. ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1963, in main() File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1956, in main cm = ContainerManager(client) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1609, in __init__ self.present(state) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1634, in present different, differences = container.has_different_configuration(image) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1253, in has_different_configuration set_b = set(value) TypeError: unhashable type: 'dict' failed: [localhost] (item={u'pa': None, u'container_name': u'pcontainer', u'container_version': u'latest', u'name': u'pa'}) => { ""failed"": true, ""invocation"": { ""module_name"": ""docker_container"" }, ""item"": { ""container_name"": ""pcontainer"", ""container_version"": ""latest"", ""name"": ""pa"", ""pa"": null }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1963, in \n main()\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1956, in main\n cm = ContainerManager(client)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1609, in __init__\n self.present(state)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1634, in present\n different, differences = container.has_different_configuration(image)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1253, in has_different_configuration\n set_b = set(value)\nTypeError: unhashable type: 'dict'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` Note that the reason is because the docker inspect version of devices is different from the way devices are configured. Devices are configured with a list of strings, each string like ""/dev/on/host:/dev/in/container:rwm"" where rwm represents permissions. Docker inspect returns something like this: ``` ""Devices"": [ { ""CgroupPermissions"": ""rwm"", ""PathInContainer"": ""/dev/net/tun"", ""PathOnHost"": ""/dev/net/tun"" }, { ""CgroupPermissions"": ""rwm"", ""PathInContainer"": ""/dev/kvm"", ""PathOnHost"": ""/dev/kvm"" } ], ``` I have managed to make it work with this patch to lib/ansible/modules/core/cloud/docker/docker_container.py ``` 1203c1203 < devices=host_config.get('Devices'), --- > devices=[':'.join([ t[p] for p in ['PathOnHost','PathInContainer' if t['PathInContainer'] != t['PathOnHost'] else None ,'CgroupPermissions' if t['CgroupPermissions'] != 'rwm' else None] if p is not None]) for t in host_config.get('Devices')] if host_config.get('Devices') else None, ``` It looks a bit convoluted, but I wanted to keep it in the same inline one liner theme of the surrounding code. Here's the explanation: Starting at the ""for p in []"", the list p iterates over is a specified list of strings, in the correct order, that corresponds to the keys that are in the inspect hash. However, the string is replaced with None if it shouldn't be displayed within the configuration string. E.g. if it's default permissions (rwm), or if the host and container have the same path, then the configuration string would not show these values, so replace them with None. Then a list of the values from the device config hash, corresponding to the listed key is generated, but a None key is ignored with ""if p is not None"". 
So at this stage, we would have something like [""/dev/on/host"",""/dev/in/container"",""rw"" ], or [""/dev/on/host"",None,""rw""] and then that list is joined together with a "":"" to form the correct string. This is repeated for every device hash, setting t to that hash and the result is compiled into a list of configuration strings, just like what would be set in the .yml file. Finally, it just returns None if there is no device config. ",True,"docker_container.py fails on second run if devices are configured - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing special ##### OS / ENVIRONMENT ubuntu ##### SUMMARY When you have devices configured in your playbook, e.g. devices: - /dev/net/tun - /dev/kvm Then a second run through of a playbook will fail. Details below. ##### STEPS TO REPRODUCE Create a container with devices configured. Then rerun the same playbook. ##### EXPECTED RESULTS Nothing should change, let alone fail. ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1963, in main() File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1956, in main cm = ContainerManager(client) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1609, in __init__ self.present(state) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1634, in present different, differences = container.has_different_configuration(image) File ""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py"", line 1253, in has_different_configuration set_b = set(value) TypeError: unhashable type: 'dict' failed: [localhost] (item={u'pa': None, u'container_name': u'pcontainer', u'container_version': u'latest', u'name': u'pa'}) => { ""failed"": true, ""invocation"": { ""module_name"": ""docker_container"" }, ""item"": { ""container_name"": ""pcontainer"", ""container_version"": ""latest"", ""name"": ""pa"", ""pa"": null }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1963, in \n main()\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1956, in main\n cm = ContainerManager(client)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1609, in __init__\n self.present(state)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1634, in present\n different, differences = container.has_different_configuration(image)\n File \""/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\"", line 1253, in has_different_configuration\n set_b = set(value)\nTypeError: unhashable type: 'dict'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` Note that the reason is because the docker inspect version of devices is different from the way devices are configured. Devices are configured with a list of strings, each string like ""/dev/on/host:/dev/in/container:rwm"" where rwm represents permissions. 
Docker inspect returns something like this: ``` ""Devices"": [ { ""CgroupPermissions"": ""rwm"", ""PathInContainer"": ""/dev/net/tun"", ""PathOnHost"": ""/dev/net/tun"" }, { ""CgroupPermissions"": ""rwm"", ""PathInContainer"": ""/dev/kvm"", ""PathOnHost"": ""/dev/kvm"" } ], ``` I have managed to make it work with this patch to lib/ansible/modules/core/cloud/docker/docker_container.py ``` 1203c1203 < devices=host_config.get('Devices'), --- > devices=[':'.join([ t[p] for p in ['PathOnHost','PathInContainer' if t['PathInContainer'] != t['PathOnHost'] else None ,'CgroupPermissions' if t['CgroupPermissions'] != 'rwm' else None] if p is not None]) for t in host_config.get('Devices')] if host_config.get('Devices') else None, ``` It looks a bit convoluted, but I wanted to keep it in the same inline one liner theme of the surrounding code. Here's the explanation: Starting at the ""for p in []"", the list p iterates over is a specified list of strings, in the correct order, that corresponds to the keys that are in the inspect hash. However, the string is replaced with None if it shouldn't be displayed within the configuration string. E.g. if it's default permissions (rwm), or if the host and container have the same path, then the configuration string would not show these values, so replace them with None. Then a list of the values from the device config hash, corresponding to the listed key is generated, but a None key is ignored with ""if p is not None"". So at this stage, we would have something like [""/dev/on/host"",""/dev/in/container"",""rw"" ], or [""/dev/on/host"",None,""rw""] and then that list is joined together with a "":"" to form the correct string. This is repeated for every device hash, setting t to that hash and the result is compiled into a list of configuration strings, just like what would be set in the .yml file. Finally, it just returns None if there is no device config. 
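Editor's note: for comparison, here is the same idea as the one-line patch in the report written out as a small function — turn the `docker inspect` device dictionaries back into the short playbook strings before comparing configurations. It is a sketch, not the module's eventual fix, and unlike the one-liner it keeps the container path whenever non-default permissions have to be spelled out, since `host:perms` on its own would be ambiguous.

```
def devices_to_strings(devices):
    """Convert HostConfig.Devices entries from `docker inspect` into the
    'host_path[:container_path[:permissions]]' strings used in playbooks."""
    if not devices:
        return None
    result = []
    for dev in devices:
        same_path = dev['PathInContainer'] == dev['PathOnHost']
        default_perms = dev['CgroupPermissions'] == 'rwm'
        parts = [dev['PathOnHost']]
        if not same_path or not default_perms:
            parts.append(dev['PathInContainer'])
        if not default_perms:
            parts.append(dev['CgroupPermissions'])
        result.append(':'.join(parts))
    return result


inspected = [
    {"CgroupPermissions": "rwm", "PathInContainer": "/dev/net/tun", "PathOnHost": "/dev/net/tun"},
    {"CgroupPermissions": "rwm", "PathInContainer": "/dev/kvm", "PathOnHost": "/dev/kvm"},
]
print(devices_to_strings(inspected))  # ['/dev/net/tun', '/dev/kvm']
```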
",1,docker container py fails on second run if devices are configured issue type bug report component name docker container ansible version ansible config file configured module search path default w o overrides configuration nothing special os environment ubuntu summary when you have devices configured in your playbook e g devices dev net tun dev kvm then a second run through of a playbook will fail details below steps to reproduce create a container with devices configured then rerun the same playbook expected results nothing should change let alone fail actual results an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module docker container py line in main file tmp ansible ansible module docker container py line in main cm containermanager client file tmp ansible ansible module docker container py line in init self present state file tmp ansible ansible module docker container py line in present different differences container has different configuration image file tmp ansible ansible module docker container py line in has different configuration set b set value typeerror unhashable type dict failed item u pa none u container name u pcontainer u container version u latest u name u pa failed true invocation module name docker container item container name pcontainer container version latest name pa pa null module stderr traceback most recent call last n file tmp ansible ansible module docker container py line in n main n file tmp ansible ansible module docker container py line in main n cm containermanager client n file tmp ansible ansible module docker container py line in init n self present state n file tmp ansible ansible module docker container py line in present n different differences container has different configuration image n file tmp ansible ansible module docker container py line in has different configuration n set b set value ntypeerror unhashable type dict n module stdout msg module failure note that the reason is because the docker inspect version of devices is different from the way devices are configured devices are configured with a list of strings each string like dev on host dev in container rwm where rwm represents permissions docker inspect returns something like this devices cgrouppermissions rwm pathincontainer dev net tun pathonhost dev net tun cgrouppermissions rwm pathincontainer dev kvm pathonhost dev kvm i have managed to make it work with this patch to lib ansible modules core cloud docker docker container py devices host config get devices devices for p in t else none cgrouppermissions if t rwm else none if p is not none for t in host config get devices if host config get devices else none it looks a bit convoluted but i wanted to keep it in the same inline one liner theme of the surrounding code here s the explanation starting at the for p in the list p iterates over is a specified list of strings in the correct order that corresponds to the keys that are in the inspect hash however the string is replaced with none if it shouldn t be displayed within the configuration string e g if it s default permissions rwm or if the host and container have the same path then the configuration string would not show these values so replace them with none then a list of the values from the device config hash corresponding to the listed key is generated but a none key is ignored with if p is not none so at this stage we would have something like or and then that list is joined together with a to 
form the correct string this is repeated for every device hash setting t to that hash and the result is compiled into a list of configuration strings just like what would be set in the yml file finally it just returns none if there is no device config ,1 1681,6574141818.0,IssuesEvent,2017-09-11 11:40:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Failure when adding a program using supervisorctl which disables supervisorctl on managed host,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME supervisorctl ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/mike/installer configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] host_key_checking = False private_key_file = /home/mike/.ssh/id_rsa retry_files_enabled = False [ssh_connection] ssh_args = -o ControlPersist=60s -q scp_if_ssh = True ##### OS / ENVIRONMENT OS running Ansible: Linux OS being managed: Linux ##### SUMMARY - I have a program _foobar_ that exists in Git. - supervisord is currently running _foobar_ on my managed machine. - I have written a playbook that checks out the code for _foobar_ from Git, builds it on the Ansible machine, rsync's it to my managed machine (along side other supervisord managed programs). This all works well. - Here's the part where I suffer **occasional** and **intermittent** failures. My playbook attempts to reload _foobar_ under supervisord using the supervisorctl module. - 90% - 95% of the time, the program is reloaded and restarted correctly, and Ansible does not report any failures. - However, on occasion, Ansible will fail: ``` fatal: [example.com]: FAILED! => {""changed"": false, ""cmd"": ""usr/bin/supervisorctl -c /home/mike/services/supervisord.conf reread"", ""failed"": true, ""msg"": """", ""rc"": 2, ""stderr"": """", ""stdout"": ""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567 n"", """"stdout_lines"": [""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567""]} ``` - When I log onto the managed host, using the `ps` command, I can see that supervisord is still running and that all of the managed Java programs are still running, but I can no longer run `supervisorctl -c supervisord.conf -i` on the managed machine (as it returns _refused connection_). - Additionally, it appears that the programs being managed under supervisord are restarting infinitely as their PID values continue increasing. - Once I `kill` the supervisord process, the managed programs cease restarting and terminate completely. - This essentially means every once and a while, when I have a failure like this, I have to log on to each host, kill supervisord, and restart it again. After I've done that, the deployments and subsequent Ansible supervisorctl commands continue working again for an arbitrary period of time. ##### STEPS TO REPRODUCE - Start a supervisord managed program. - Attempt to start and stop the service from the Ansible host. Here are the commands I'm using from the Ansible side: ``` # This is where the failure occurs. 
- name: add programs under the group name foobar to supervisord if necessary supervisorctl: name=""foobar:"" state=present config=""/home/mike/services/supervisord.conf"" - name: restart programs under the group name foobar supervisorctl: name=""foobar:"" state=restarted config=""/home/mike/services/supervisord.conf"" ``` ##### EXPECTED RESULTS The supervisord managed service should either: - If it is currently running under supervisord, reload the service, and restart it. - If it is not running under supervisord, add the service to supervisord, and start it. ##### ACTUAL RESULTS As I mentioned previously, it appears that supervisorctl can no longer communicate with the supervisord instance running on the managed host and it is either caused by or related to the exception that Ansible reports : ``` fatal: [example.com]: FAILED! => {""changed"": false, ""cmd"": ""usr/bin/supervisorctl -c /home/mike/services/supervisord.conf reread"", ""failed"": true, ""msg"": """", ""rc"": 2, ""stderr"": """", ""stdout"": ""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567 n"", """"stdout_lines"": [""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567""]} ```",True,"Failure when adding a program using supervisorctl which disables supervisorctl on managed host - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME supervisorctl ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/mike/installer configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] host_key_checking = False private_key_file = /home/mike/.ssh/id_rsa retry_files_enabled = False [ssh_connection] ssh_args = -o ControlPersist=60s -q scp_if_ssh = True ##### OS / ENVIRONMENT OS running Ansible: Linux OS being managed: Linux ##### SUMMARY - I have a program _foobar_ that exists in Git. - supervisord is currently running _foobar_ on my managed machine. - I have written a playbook that checks out the code for _foobar_ from Git, builds it on the Ansible machine, rsync's it to my managed machine (along side other supervisord managed programs). This all works well. - Here's the part where I suffer **occasional** and **intermittent** failures. My playbook attempts to reload _foobar_ under supervisord using the supervisorctl module. - 90% - 95% of the time, the program is reloaded and restarted correctly, and Ansible does not report any failures. - However, on occasion, Ansible will fail: ``` fatal: [example.com]: FAILED! => {""changed"": false, ""cmd"": ""usr/bin/supervisorctl -c /home/mike/services/supervisord.conf reread"", ""failed"": true, ""msg"": """", ""rc"": 2, ""stderr"": """", ""stdout"": ""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567 n"", """"stdout_lines"": [""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567""]} ``` - When I log onto the managed host, using the `ps` command, I can see that supervisord is still running and that all of the managed Java programs are still running, but I can no longer run `supervisorctl -c supervisord.conf -i` on the managed machine (as it returns _refused connection_). - Additionally, it appears that the programs being managed under supervisord are restarting infinitely as their PID values continue increasing. - Once I `kill` the supervisord process, the managed programs cease restarting and terminate completely. 
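Editor's note on the supervisorctl report above: since the failure is intermittent and shows up as a refused connection on supervisord's control socket, one stop-gap on the operations side is to retry the supervisorctl call a few times before declaring it dead. This is a hedged workaround sketch, not something the Ansible module itself does; the config path is the one from the report.

```
import subprocess
import time

SUPERVISORCTL = ["supervisorctl", "-c", "/home/mike/services/supervisord.conf"]


def supervisorctl_retry(args, attempts=5, delay=3):
    """Run a supervisorctl subcommand, retrying while the control socket
    refuses connections; raise if it never comes back."""
    output = ""
    for attempt in range(1, attempts + 1):
        proc = subprocess.run(SUPERVISORCTL + args, capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        if proc.returncode == 0 and "connection refused" not in output.lower():
            return output
        if attempt < attempts:
            time.sleep(delay)
    raise RuntimeError("supervisorctl %s kept failing: %s" % (" ".join(args), output.strip()))


# e.g. supervisorctl_retry(["reread"]) followed by supervisorctl_retry(["update"])
```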
- This essentially means every once and a while, when I have a failure like this, I have to log on to each host, kill supervisord, and restart it again. After I've done that, the deployments and subsequent Ansible supervisorctl commands continue working again for an arbitrary period of time. ##### STEPS TO REPRODUCE - Start a supervisord managed program. - Attempt to start and stop the service from the Ansible host. Here are the commands I'm using from the Ansible side: ``` # This is where the failure occurs. - name: add programs under the group name foobar to supervisord if necessary supervisorctl: name=""foobar:"" state=present config=""/home/mike/services/supervisord.conf"" - name: restart programs under the group name foobar supervisorctl: name=""foobar:"" state=restarted config=""/home/mike/services/supervisord.conf"" ``` ##### EXPECTED RESULTS The supervisord managed service should either: - If it is currently running under supervisord, reload the service, and restart it. - If it is not running under supervisord, add the service to supervisord, and start it. ##### ACTUAL RESULTS As I mentioned previously, it appears that supervisorctl can no longer communicate with the supervisord instance running on the managed host and it is either caused by or related to the exception that Ansible reports : ``` fatal: [example.com]: FAILED! => {""changed"": false, ""cmd"": ""usr/bin/supervisorctl -c /home/mike/services/supervisord.conf reread"", ""failed"": true, ""msg"": """", ""rc"": 2, ""stderr"": """", ""stdout"": ""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567 n"", """"stdout_lines"": [""error: , [Errno 111] Connection refused: file: /usr/lib64/python2.6/socket.py line: 567""]} ```",1,failure when adding a program using supervisorctl which disables supervisorctl on managed host issue type bug report component name supervisorctl ansible version ansible config file home mike installer configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables host key checking false private key file home mike ssh id rsa retry files enabled false ssh args o controlpersist q scp if ssh true os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific os running ansible linux os being managed linux summary i have a program foobar that exists in git supervisord is currently running foobar on my managed machine i have written a playbook that checks out the code for foobar from git builds it on the ansible machine rsync s it to my managed machine along side other supervisord managed programs this all works well here s the part where i suffer occasional and intermittent failures my playbook attempts to reload foobar under supervisord using the supervisorctl module of the time the program is reloaded and restarted correctly and ansible does not report any failures however on occasion ansible will fail fatal failed changed false cmd usr bin supervisorctl c home mike services supervisord conf reread failed true msg rc stderr stdout error connection refused file usr socket py line n stdout lines connection refused file usr socket py line when i log onto the managed host using the ps command i can see that supervisord is still running and that all of the managed java programs are still running but i can no longer run supervisorctl c supervisord conf i on the managed machine as it 
returns refused connection additionally it appears that the programs being managed under supervisord are restarting infinitely as their pid values continue increasing once i kill the supervisord process the managed programs cease restarting and terminate completely this essentially means every once and a while when i have a failure like this i have to log on to each host kill supervisord and restart it again after i ve done that the deployments and subsequent ansible supervisorctl commands continue working again for an arbitrary period of time steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used start a supervisord managed program attempt to start and stop the service from the ansible host here are the commands i m using from the ansible side this is where the failure occurs name add programs under the group name foobar to supervisord if necessary supervisorctl name foobar state present config home mike services supervisord conf name restart programs under the group name foobar supervisorctl name foobar state restarted config home mike services supervisord conf expected results the supervisord managed service should either if it is currently running under supervisord reload the service and restart it if it is not running under supervisord add the service to supervisord and start it actual results as i mentioned previously it appears that supervisorctl can no longer communicate with the supervisord instance running on the managed host and it is either caused by or related to the exception that ansible reports fatal failed changed false cmd usr bin supervisorctl c home mike services supervisord conf reread failed true msg rc stderr stdout error connection refused file usr socket py line n stdout lines connection refused file usr socket py line ,1 1448,6287561281.0,IssuesEvent,2017-07-19 15:11:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,IAM module: Managed policies,affects_2.3 aws cloud feature_idea waiting_on_maintainer,"**Issue Type:** Feature Idea **Summary:** Support managed policies. Currently, only inline policies are supported, making it impossible to link a policy to several groups/users/roles. I reckon in the meantime this limitation could be made clearer in the doc. ",True,"IAM module: Managed policies - **Issue Type:** Feature Idea **Summary:** Support managed policies. Currently, only inline policies are supported, making it impossible to link a policy to several groups/users/roles. I reckon in the meantime this limitation could be made clearer in the doc. 
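Editor's note on the IAM feature request above: managed-policy support means attaching one existing policy (by ARN) to several principals, which inline policies cannot do. Until the module grew that support, the operation itself is a few boto3 calls; the group, user, role and ARN below are placeholders, not values from the report.

```
import boto3

iam = boto3.client("iam")
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"  # placeholder managed policy

# The same managed policy can be linked to groups, users and roles alike.
iam.attach_group_policy(GroupName="developers", PolicyArn=policy_arn)
iam.attach_user_policy(UserName="alice", PolicyArn=policy_arn)
iam.attach_role_policy(RoleName="app-role", PolicyArn=policy_arn)
```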
",1,iam module managed policies issue type feature idea summary support managed policies currently only inline policies are supported making it impossible to link a policy to several groups users roles i reckon in the meantime this limitation could be made clearer in the doc ,1 1689,6574167726.0,IssuesEvent,2017-09-11 11:47:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_command paramiko.ssh_exception.BadHostKeyException bad key exception failure,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] hostfile=localstage #hostfile=mas-b43 ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/masd-rsa host_key_checking=False ``` ##### OS / ENVIRONMENT ##### SUMMARY An updated switch configuration is sent to the switch. The switch get's reloaded. Post reload the ssh server key has changed on the switch. The public key that is stored in known_hosts is no longer valid. paramiko throws an error and fails. in the Ansible config file, the host_key_checking=False is set. ##### STEPS TO REPRODUCE ``` --- - name: Delete remote config nxos_command: provider: ""{{ cli }}"" host: ""{{ ansible_host }}"" commands: - ""delete bootflash:{{ inventory_hostname }}.conf "" ``` ##### EXPECTED RESULTS ignore ssh key change and establish a connection. ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_sespbZ/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_sespbZ/ansible_module_nxos_command.py"", line 193, in main supports_check_mode=True) File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py"", line 112, in __init__ File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/nxos.py"", line 266, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py"", line 96, in open File ""/usr/lib/python2.7/dist-packages/paramiko/client.py"", line 353, in connect raise BadHostKeyException(hostname, server_key, our_server_key) paramiko.ssh_exception.BadHostKeyException: ('10.10.230.12', , ) fatal: [rr1-n35-r10-x32sp-2a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_sespbZ/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_sespbZ/ansible_module_nxos_command.py\"", line 193, in main\n supports_check_mode=True)\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py\"", line 112, in __init__\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/nxos.py\"", line 266, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 96, in open\n File \""/usr/lib/python2.7/dist-packages/paramiko/client.py\"", line 353, in connect\n raise BadHostKeyException(hostname, server_key, our_server_key)\nparamiko.ssh_exception.BadHostKeyException: ('10.10.230.12', , )\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ssh response to the key ``` emarq@rr1masdansible:~/Solutions.Network.Automation/MAS/Ansible/cisco/nexus$ ssh admin@10.10.230.13 -vvvvv OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016 debug1: Reading configuration data /home/emarq/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: resolving ""10.10.230.13"" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to 10.10.230.13 [10.10.230.13] port 22. debug1: Connection established. debug1: identity file /home/emarq/.ssh/masd-rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /home/emarq/.ssh/masd-rsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1 debug1: ssh_exchange_identification: dcos_sshd run in non-FIPS mode debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 FIPS debug1: match: OpenSSH_6.2 FIPS pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 10.10.230.13:22 as 'admin' debug3: hostkeys_foreach: reading file ""/home/emarq/.ssh/known_hosts"" debug3: record_hostkey: found key type RSA in file /home/emarq/.ssh/known_hosts:12 debug3: load_hostkeys: loaded 1 keys from 10.10.230.13 debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256,ssh-rsa debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,ext-info-c debug2: host key algorithms: ssh-rsa-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519 debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc debug2: ciphers stoc: 
chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com,zlib debug2: compression stoc: none,zlib@openssh.com,zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: host key algorithms: ssh-rsa debug2: ciphers ctos: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: ciphers stoc: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: MACs ctos: hmac-sha1 debug2: MACs stoc: hmac-sha1 debug2: compression ctos: none,zlib@openssh.com debug2: compression stoc: none,zlib@openssh.com debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: diffie-hellman-group-exchange-sha256 debug1: kex: host key algorithm: ssh-rsa debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha1 compression: none debug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha1 compression: none debug3: send packet: type 34 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(2048<7680<8192) sent debug3: receive packet: type 31 debug1: got SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1017/2048 debug3: send packet: type 32 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug3: receive packet: type 33 debug1: got SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: ssh-rsa SHA256:KwU71WtBGcBXk4XddRCstz7qTsTmPFxOPISEmDq1mN8 debug3: hostkeys_foreach: reading file ""/home/emarq/.ssh/known_hosts"" debug3: record_hostkey: found key type RSA in file /home/emarq/.ssh/known_hosts:12 debug3: load_hostkeys: loaded 1 keys from 10.10.230.13 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the RSA key sent by the remote host is SHA256:KwU71WtBGcBXk4XddRCstz7qTsTmPFxOPISEmDq1mN8. Please contact your system administrator. Add correct host key in /home/emarq/.ssh/known_hosts to get rid of this message. Offending RSA key in /home/emarq/.ssh/known_hosts:12 remove with: ssh-keygen -f ""/home/emarq/.ssh/known_hosts"" -R 10.10.230.13 RSA host key for 10.10.230.13 has changed and you have requested strict checking. Host key verification failed. 
```",True,"nxos_command paramiko.ssh_exception.BadHostKeyException bad key exception failure - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] hostfile=localstage #hostfile=mas-b43 ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/masd-rsa host_key_checking=False ``` ##### OS / ENVIRONMENT ##### SUMMARY An updated switch configuration is sent to the switch. The switch get's reloaded. Post reload the ssh server key has changed on the switch. The public key that is stored in known_hosts is no longer valid. paramiko throws an error and fails. in the Ansible config file, the host_key_checking=False is set. ##### STEPS TO REPRODUCE ``` --- - name: Delete remote config nxos_command: provider: ""{{ cli }}"" host: ""{{ ansible_host }}"" commands: - ""delete bootflash:{{ inventory_hostname }}.conf "" ``` ##### EXPECTED RESULTS ignore ssh key change and establish a connection. ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_sespbZ/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_sespbZ/ansible_module_nxos_command.py"", line 193, in main supports_check_mode=True) File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py"", line 112, in __init__ File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py"", line 148, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/nxos.py"", line 266, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py"", line 226, in connect File ""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py"", line 96, in open File ""/usr/lib/python2.7/dist-packages/paramiko/client.py"", line 353, in connect raise BadHostKeyException(hostname, server_key, our_server_key) paramiko.ssh_exception.BadHostKeyException: ('10.10.230.12', , ) fatal: [rr1-n35-r10-x32sp-2a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_sespbZ/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_sespbZ/ansible_module_nxos_command.py\"", line 193, in main\n supports_check_mode=True)\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py\"", line 112, in __init__\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/nxos.py\"", line 266, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_sespbZ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 96, in open\n File \""/usr/lib/python2.7/dist-packages/paramiko/client.py\"", line 353, in connect\n raise BadHostKeyException(hostname, server_key, our_server_key)\nparamiko.ssh_exception.BadHostKeyException: ('10.10.230.12', , )\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ssh response to the key ``` emarq@rr1masdansible:~/Solutions.Network.Automation/MAS/Ansible/cisco/nexus$ ssh admin@10.10.230.13 -vvvvv OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016 debug1: Reading configuration data /home/emarq/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: resolving ""10.10.230.13"" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to 10.10.230.13 [10.10.230.13] port 22. debug1: Connection established. debug1: identity file /home/emarq/.ssh/masd-rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /home/emarq/.ssh/masd-rsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1 debug1: ssh_exchange_identification: dcos_sshd run in non-FIPS mode debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 FIPS debug1: match: OpenSSH_6.2 FIPS pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 10.10.230.13:22 as 'admin' debug3: hostkeys_foreach: reading file ""/home/emarq/.ssh/known_hosts"" debug3: record_hostkey: found key type RSA in file /home/emarq/.ssh/known_hosts:12 debug3: load_hostkeys: loaded 1 keys from 10.10.230.13 debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256,ssh-rsa debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,ext-info-c debug2: host key algorithms: ssh-rsa-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519 debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc debug2: ciphers stoc: 
chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com,zlib debug2: compression stoc: none,zlib@openssh.com,zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: host key algorithms: ssh-rsa debug2: ciphers ctos: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: ciphers stoc: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: MACs ctos: hmac-sha1 debug2: MACs stoc: hmac-sha1 debug2: compression ctos: none,zlib@openssh.com debug2: compression stoc: none,zlib@openssh.com debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: diffie-hellman-group-exchange-sha256 debug1: kex: host key algorithm: ssh-rsa debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha1 compression: none debug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha1 compression: none debug3: send packet: type 34 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(2048<7680<8192) sent debug3: receive packet: type 31 debug1: got SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1017/2048 debug3: send packet: type 32 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug3: receive packet: type 33 debug1: got SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: ssh-rsa SHA256:KwU71WtBGcBXk4XddRCstz7qTsTmPFxOPISEmDq1mN8 debug3: hostkeys_foreach: reading file ""/home/emarq/.ssh/known_hosts"" debug3: record_hostkey: found key type RSA in file /home/emarq/.ssh/known_hosts:12 debug3: load_hostkeys: loaded 1 keys from 10.10.230.13 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the RSA key sent by the remote host is SHA256:KwU71WtBGcBXk4XddRCstz7qTsTmPFxOPISEmDq1mN8. Please contact your system administrator. Add correct host key in /home/emarq/.ssh/known_hosts to get rid of this message. Offending RSA key in /home/emarq/.ssh/known_hosts:12 remove with: ssh-keygen -f ""/home/emarq/.ssh/known_hosts"" -R 10.10.230.13 RSA host key for 10.10.230.13 has changed and you have requested strict checking. Host key verification failed. 
```",1,nxos command paramiko ssh exception badhostkeyexception bad key exception failure issue type bug report component name nxos command ansible version ansible config file home emarq solutions network automation mas ansible cisco nexus ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables hostfile localstage hostfile mas ansible ssh user admin ansible ssh private key file home emarq ssh masd rsa host key checking false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary an updated switch configuration is sent to the switch the switch get s reloaded post reload the ssh server key has changed on the switch the public key that is stored in known hosts is no longer valid paramiko throws an error and fails in the ansible config file the host key checking false is set steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name delete remote config nxos command provider cli host ansible host commands delete bootflash inventory hostname conf expected results ignore ssh key change and establish a connection actual results an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible sespbz ansible module nxos command py line in main file tmp ansible sespbz ansible module nxos command py line in main supports check mode true file tmp ansible sespbz ansible modlib zip ansible module utils network py line in init file tmp ansible sespbz ansible modlib zip ansible module utils network py line in connect file tmp ansible sespbz ansible modlib zip ansible module utils nxos py line in connect file tmp ansible sespbz ansible modlib zip ansible module utils shell py line in connect file tmp ansible sespbz ansible modlib zip ansible module utils shell py line in open file usr lib dist packages paramiko client py line in connect raise badhostkeyexception hostname server key our server key paramiko ssh exception badhostkeyexception fatal failed changed false failed true invocation module name nxos command module stderr traceback most recent call last n file tmp ansible sespbz ansible module nxos command py line in n main n file tmp ansible sespbz ansible module nxos command py line in main n supports check mode true n file tmp ansible sespbz ansible modlib zip ansible module utils network py line in init n file tmp ansible sespbz ansible modlib zip ansible module utils network py line in connect n file tmp ansible sespbz ansible modlib zip ansible module utils nxos py line in connect n file tmp ansible sespbz ansible modlib zip ansible module utils shell py line in connect n file tmp ansible sespbz ansible modlib zip ansible module utils shell py line in open n file usr lib dist packages paramiko client py line in connect n raise badhostkeyexception hostname server key our server key nparamiko ssh exception badhostkeyexception n module stdout msg module failure ssh response to the key emarq solutions network automation mas ansible cisco nexus ssh admin vvvvv openssh ubuntu openssl mar reading configuration data home emarq ssh config reading configuration data etc ssh ssh config etc ssh ssh config line applying options for resolving port ssh connect direct needpriv connecting to port connection established identity file home emarq ssh masd rsa type key load 
public no such file or directory identity file home emarq ssh masd rsa cert type enabling compatibility mode for protocol local version string ssh openssh ubuntu ssh exchange identification dcos sshd run in non fips mode remote protocol version remote software version openssh fips match openssh fips pat openssh compat fd setting o nonblock authenticating to as admin hostkeys foreach reading file home emarq ssh known hosts record hostkey found key type rsa in file home emarq ssh known hosts load hostkeys loaded keys from order hostkeyalgs prefer hostkeyalgs ssh rsa cert openssh com rsa rsa ssh rsa send packet type msg kexinit sent receive packet type msg kexinit received local client kexinit proposal kex algorithms libssh org ecdh ecdh ecdh diffie hellman group exchange diffie hellman group exchange diffie hellman ext info c host key algorithms ssh rsa cert openssh com rsa rsa ssh rsa ecdsa cert openssh com ecdsa cert openssh com ecdsa cert openssh com ssh cert openssh com ecdsa ecdsa ecdsa ssh ciphers ctos openssh com ctr ctr ctr gcm openssh com gcm openssh com cbc cbc cbc cbc ciphers stoc openssh com ctr ctr ctr gcm openssh com gcm openssh com cbc cbc cbc cbc macs ctos umac etm openssh com umac etm openssh com hmac etm openssh com hmac etm openssh com hmac etm openssh com umac openssh com umac openssh com hmac hmac hmac macs stoc umac etm openssh com umac etm openssh com hmac etm openssh com hmac etm openssh com hmac etm openssh com umac openssh com umac openssh com hmac hmac hmac compression ctos none zlib openssh com zlib compression stoc none zlib openssh com zlib languages ctos languages stoc first kex follows reserved peer server kexinit proposal kex algorithms diffie hellman group exchange diffie hellman group exchange diffie hellman diffie hellman host key algorithms ssh rsa ciphers ctos cbc cbc cbc cbc rijndael cbc lysator liu se ctr ctr ctr ciphers stoc cbc cbc cbc cbc rijndael cbc lysator liu se ctr ctr ctr macs ctos hmac macs stoc hmac compression ctos none zlib openssh com compression stoc none zlib openssh com languages ctos languages stoc first kex follows reserved kex algorithm diffie hellman group exchange kex host key algorithm ssh rsa kex server client cipher ctr mac hmac compression none kex client server cipher ctr mac hmac compression none send packet type msg kex dh gex request sent receive packet type got msg kex dh gex group bits set send packet type msg kex dh gex init sent receive packet type got msg kex dh gex reply server host key ssh rsa hostkeys foreach reading file home emarq ssh known hosts record hostkey found key type rsa in file home emarq ssh known hosts load hostkeys loaded keys from warning remote host identification has changed it is possible that someone is doing something nasty someone could be eavesdropping on you right now man in the middle attack it is also possible that a host key has just been changed the fingerprint for the rsa key sent by the remote host is please contact your system administrator add correct host key in home emarq ssh known hosts to get rid of this message offending rsa key in home emarq ssh known hosts remove with ssh keygen f home emarq ssh known hosts r rsa host key for has changed and you have requested strict checking host key verification failed ,1 983,4750329485.0,IssuesEvent,2016-10-22 09:05:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Gem module installs executables in directory different from gem executable.,affects_2.0 bug_report waiting_on_maintainer," ##### 
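A minimal workaround sketch for the nxos_command host-key failure reported above: clear the stale entry from the control node's known_hosts before the module reconnects after the switch reload. The use of `ansible_host` and the default known_hosts location are assumptions for illustration; whether paramiko consults that same file depends on the local SSH/paramiko setup.
```
# Hypothetical pre-task: drop the reloaded switch's old key so the next
# nxos_command connection is not rejected with BadHostKeyException.
- name: Remove stale SSH host key for the reloaded switch
  local_action: command ssh-keygen -R {{ ansible_host }}
  changed_when: false
```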
ISSUE TYPE - Bug Report ##### COMPONENT NAME gem ##### ANSIBLE VERSION ``` ansible 2.0.1.0``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible on Mac OS, remote host is Ubuntu 12.04 ##### SUMMARY When installing a gem as root, the gem executable (`jekyll` in my case) goes into `/root/.gem/ruby/2.3.0/bin`, and is therefore not in the PATH unless the path is specifically configured to include it. In contrast, running `gem install` on the command line puts the executable in `/usr/local/bin` which is already in the PATH. ##### STEPS TO REPRODUCE I used this code to use the gem module: ``` gem: name=jekyll state=latest ``` ...and this to install from the command line: ``` gem install jekyll ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"Gem module installs executables in directory different from gem executable. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gem ##### ANSIBLE VERSION ``` ansible 2.0.1.0``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible on Mac OS, remote host is Ubuntu 12.04 ##### SUMMARY When installing a gem as root, the gem executable (`jekyll` in my case) goes into `/root/.gem/ruby/2.3.0/bin`, and is therefore not in the PATH unless the path is specifically configured to include it. In contrast, running `gem install` on the command line puts the executable in `/usr/local/bin` which is already in the PATH. ##### STEPS TO REPRODUCE I used this code to use the gem module: ``` gem: name=jekyll state=latest ``` ...and this to install from the command line: ``` gem install jekyll ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,gem module installs executables in directory different from gem executable issue type bug report component name gem ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running ansible on mac os remote host is ubuntu summary when installing a gem as root the gem executable jekyll in my case goes into root gem ruby bin and is therefore not in the path unless the path is specifically configured to include it in contrast running gem install on the command line puts the executable in usr local bin which is already in the path steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i used this code to use the gem module gem name jekyll state latest and this to install from the command line gem install jekyll expected results actual results ,1 894,4553934052.0,IssuesEvent,2016-09-13 07:35:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker: 'reloaded' state recreates container everytime if volumes-from is used,affects_2.0 bug_report cloud docker P3 waiting_on_maintainer,"Hi, according to the docs it should recreate the container if parameters changed: > ""reloaded"" asserts that all matching containers are running and restarts any that have any images or configuration out of date. 
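For the gem report above, a hedged sketch: the module's `user_install` option chooses between the per-user gem directory (the `/root/.gem/.../bin` location described) and a system-wide install. Whether the resulting bindir matches what `gem install` uses on the command line still depends on the target's RubyGems configuration, so this is an illustration rather than a confirmed fix.
```
# Sketch only: install for all users instead of into root's ~/.gem.
- name: Install jekyll system-wide rather than into the per-user gem path
  gem:
    name: jekyll
    state: latest
    user_install: no
```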
But if you run this simple play multiple times, ansible will recreate the container every time even though no parameters have changed: ``` - hosts: 127.0.0.1 connection: local tasks: - docker: image=busybox name=test-data volumes=/data command=/bin/true state=present - docker: image: ubuntu name: test state: reloaded command: ""nc -l -k 2342"" volumes_from: test-data ``` In general state reloaded doesn't seem to work properly. See also #1129 which I just opened for a different issue with this.",True,"docker: 'reloaded' state recreates container everytime if volumes-from is used - Hi, according to the docs it should recreate the container if parameters changed: > ""reloaded"" asserts that all matching containers are running and restarts any that have any images or configuration out of date. But if you run this simple play multiple times, ansible will recreate the container every time even though no parameters have changed: ``` - hosts: 127.0.0.1 connection: local tasks: - docker: image=busybox name=test-data volumes=/data command=/bin/true state=present - docker: image: ubuntu name: test state: reloaded command: ""nc -l -k 2342"" volumes_from: test-data ``` In general state reloaded doesn't seem to work properly. See also #1129 which I just opened for a different issue with this.",1,docker reloaded state recreates container everytime if volumes from is used hi according to the docs it should recreate the container if parameters changed reloaded asserts that all matching containers are running and restarts any that have any images or configuration out of date but if you run this simple play multiple times ansible will recreate the container every time even though no parameters have changed hosts connection local tasks docker image busybox name test data volumes data command bin true state present docker image ubuntu name test state reloaded command nc l k volumes from test data in general state reloaded doesn t seem to work properly see also which i just opened for a different issue with this ,1 780,4386285886.0,IssuesEvent,2016-08-08 12:17:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,win_feature source parameter escapes incorrectly,bug_report waiting_on_maintainer windows,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_feature ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu Xenial 64 ##### SUMMARY When using an UNC-Path for the Source-Parameter of the win_feature module escaping doesn't work correctly. Any other combination of escape characters doesn't work. ##### STEPS TO REPRODUCE ``` --- # tasks file for vx_dotnet35 - name: Ensure .NET 3.5 feature is present win_feature: name=""NET-Framework-Features"" state=present restart=true source='\\defrasadm01\windvd\sources\sxs' ``` ##### EXPECTED RESULTS Installation of Feature ##### ACTUAL RESULTS ``` fatal: [172.29.84.91]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""win_feature""}, ""msg"": ""Failed to find source path \\\\defrasadm01\\windvd\\sources\\sxs""} ``` ",True,"win_feature source parameter escapes incorrectly - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_feature ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu Xenial 64 ##### SUMMARY When using an UNC-Path for the Source-Parameter of the win_feature module escaping doesn't work correctly. Any other combination of escape characters doesn't work. ##### STEPS TO REPRODUCE ``` --- # tasks file for vx_dotnet35 - name: Ensure .NET 3.5 feature is present win_feature: name=""NET-Framework-Features"" state=present restart=true source='\\defrasadm01\windvd\sources\sxs' ``` ##### EXPECTED RESULTS Installation of Feature ##### ACTUAL RESULTS ``` fatal: [172.29.84.91]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""win_feature""}, ""msg"": ""Failed to find source path \\\\defrasadm01\\windvd\\sources\\sxs""} ``` ",1,win feature source parameter escapes incorrectly issue type bug report component name win feature ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment ubuntu xenial summary when using an unc path for the source parameter of the win feature module escaping doesn t work correctly any other combination of escape characters doesn t work steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used tasks file for vx name ensure net feature is present win feature name net framework features state present restart true source windvd sources sxs expected results installation of feature actual results fatal failed changed false failed true invocation module name win feature msg failed to find source path windvd sources sxs ,1 790,4389731541.0,IssuesEvent,2016-08-08 23:20:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive overwrite symlinks,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report - Feature Idea ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### SUMMARY there is /var/run/something folder in arch and on remote host /var/run is symlink to /run after play /var/run from arch overwrites symlink /var/run ``` unarchive: src=arch.tar.gz dest=/ ``` ##### EXPECTED RESULTS I expect files form /var/run in arch.tar.gz will added to /var/run on remote host like --dereference option in tar ##### ACTUAL RESULTS symlink /var/run is overwriten by new folder from arch ",True,"unarchive overwrite symlinks - ##### ISSUE TYPE - Bug Report - Feature Idea ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### SUMMARY there is /var/run/something folder in arch and on remote host /var/run is symlink to /run after play /var/run from arch overwrites symlink /var/run ``` unarchive: src=arch.tar.gz dest=/ ``` ##### EXPECTED RESULTS I expect files form /var/run in arch.tar.gz will added to /var/run on remote host like --dereference option in tar ##### ACTUAL RESULTS symlink /var/run is overwriten by new folder from arch ",1,unarchive overwrite symlinks issue type bug report feature idea component name unarchive module ansible version ansible summary there is var run something 
folder in arch and on remote host var run is symlink to run after play var run from arch overwrites symlink var run unarchive src arch tar gz dest expected results i expect files form var run in arch tar gz will added to var run on remote host like dereference option in tar actual results symlink var run is overwriten by new folder from arch ,1 848,4506693372.0,IssuesEvent,2016-09-02 05:44:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,replace.py is failing with an UnboundLocalError,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel d99c58ee55) last updated 2016/09/01 10:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 7e79c59d38) last updated 2016/09/01 10:10:05 (GMT -700) lib/ansible/modules/extras: (detached HEAD e8a5442345) last updated 2016/09/01 10:10:05 (GMT -700) config file = /Users/jgrigonis/projects/omicia_ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OS X controller centos 6 target ##### SUMMARY Seems like a bad commit 5728ef89f0764be9066fc1bf0fbbf7785e60f4cb ##### STEPS TO REPRODUCE ``` - name: fix ctypes file replace: dest: '/usr/local/lib/python2.7/ctypes/__init__.py' regexp: '^( CFUNCTYPE.c_int..lambda: None.)' replace: ' # CFUNCTYPE(c_int)(lambda: None)' when: init.stat.exists == True ``` ##### EXPECTED RESULTS Do a replacement ##### ACTUAL RESULTS ``` {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_Kl6JDv/ansible_module_replace.py\"", line 179, in \r\n main()\r\n File \""/tmp/ansible_Kl6JDv/ansible_module_replace.py\"", line 173, in main\r\n module.exit_json(changed=changed, msg=msg, diff=diff)\r\nUnboundLocalError: local variable 'diff' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} ``` ",True,"replace.py is failing with an UnboundLocalError - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel d99c58ee55) last updated 2016/09/01 10:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 7e79c59d38) last updated 2016/09/01 10:10:05 (GMT -700) lib/ansible/modules/extras: (detached HEAD e8a5442345) last updated 2016/09/01 10:10:05 (GMT -700) config file = /Users/jgrigonis/projects/omicia_ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OS X controller centos 6 target ##### SUMMARY Seems like a bad commit 5728ef89f0764be9066fc1bf0fbbf7785e60f4cb ##### STEPS TO REPRODUCE ``` - name: fix ctypes file replace: dest: '/usr/local/lib/python2.7/ctypes/__init__.py' regexp: '^( CFUNCTYPE.c_int..lambda: None.)' replace: ' # CFUNCTYPE(c_int)(lambda: None)' when: init.stat.exists == True ``` ##### EXPECTED RESULTS Do a replacement ##### ACTUAL RESULTS ``` {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_Kl6JDv/ansible_module_replace.py\"", line 179, in \r\n main()\r\n File \""/tmp/ansible_Kl6JDv/ansible_module_replace.py\"", line 173, in main\r\n module.exit_json(changed=changed, msg=msg, diff=diff)\r\nUnboundLocalError: local variable 'diff' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} ``` ",1,replace py is failing with an unboundlocalerror issue type bug report component name replace ansible version ansible devel last updated gmt lib ansible modules 
core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file users jgrigonis projects omicia ansible ansible cfg configured module search path default w o overrides os environment os x controller centos target summary seems like a bad commit steps to reproduce name fix ctypes file replace dest usr local lib ctypes init py regexp cfunctype c int lambda none replace cfunctype c int lambda none when init stat exists true expected results do a replacement actual results changed false failed true module stderr module stdout traceback most recent call last r n file tmp ansible ansible module replace py line in r n main r n file tmp ansible ansible module replace py line in main r n module exit json changed changed msg msg diff diff r nunboundlocalerror local variable diff referenced before assignment r n msg module failure ,1 761,4363788812.0,IssuesEvent,2016-08-03 02:32:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add image update support to docker_service,cloud docker feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently it is not easily possible to update images used by docker_service. When an image does not exist, docker_service will pull it, but it won't update it once it is already there. The only way around this is to use `state:absent` and `remove_images:all` to kill the container and delete all images. This is suboptimal because you have to re-download the whole image even if just a tiny part changed and you have to invoke two instances of the docker_service command to update an existing service. ##### STEPS TO REPRODUCE I suggest adding a flag to `docker_service` which is controlling the update policy of the image in question, e.g. `update_image: yes` will perform a `docker-compose pull` before actually starting the service. If the pull yields a new image version, then this qualifies as a change of configuration and `docker_service` will then restart/recreate the container (according to the value of `recreate`). ##### ORIGIN This was originally requested in https://github.com/ansible/ansible/issues/16167. I didn't see it reopened here in the proper place, so I reopened the issue since it's also of interest to me.",True,"Add image update support to docker_service - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently it is not easily possible to update images used by docker_service. When an image does not exist, docker_service will pull it, but it won't update it once it is already there. The only way around this is to use `state:absent` and `remove_images:all` to kill the container and delete all images. This is suboptimal because you have to re-download the whole image even if just a tiny part changed and you have to invoke two instances of the docker_service command to update an existing service. ##### STEPS TO REPRODUCE I suggest adding a flag to `docker_service` which is controlling the update policy of the image in question, e.g. `update_image: yes` will perform a `docker-compose pull` before actually starting the service. 
If the pull yields a new image version, then this qualifies as a change of configuration and `docker_service` will then restart/recreate the container (according to the value of `recreate`). ##### ORIGIN This was originally requested in https://github.com/ansible/ansible/issues/16167. I didn't see it reopened here in the proper place, so I reopened the issue since it's also of interest to me.",1,add image update support to docker service issue type feature idea component name docker service ansible version ansible configuration n a os environment n a summary currently it is not easily possible to update images used by docker service when an image does not exist docker service will pull it but it won t update it once it is already there the only way around this is to use state absent and remove images all to kill the container and delete all images this is suboptimal because you have to re download the whole image even if just a tiny part changed and you have to invoke two instances of the docker service command to update an existing service steps to reproduce i suggest adding a flag to docker service which is controlling the update policy of the image in question e g update image yes will perform a docker compose pull before actually starting the service if the pull yields a new image version then this qualifies as a change of configuration and docker service will then restart recreate the container according to the value of recreate origin this was originally requested in i didn t see it reopened here in the proper place so i reopened the issue since it s also of interest to me ,1 1685,6574165796.0,IssuesEvent,2017-09-11 11:47:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Networking, more userfriendly error when raising BadHostKeyException",affects_2.2 feature_idea networking waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - ios_config ##### ANSIBLE VERSION ``` /opt/netprod/lib/env/netprod/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.2.0.0 config file = /home/paog01/ipops/ch-net-roles/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY A network device has changed the SSH key. ##### STEPS TO REPRODUCE Login to device to store the public key locally, then regenerate the ssh key on the device ##### EXPECTED RESULTS The task fails as expected. However the error is quite verbose and isn't caught. This error should be caught and reported back as a public key error instead of the ""crash"" below. ##### ACTUAL RESULTS ``` fatal: [router1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/opt/netprod/lib/env/netprod/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. 
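The docker_service feature request above already names the current workaround; a sketch of it follows, assuming a compose project under /srv/app (a made-up path). It tears the service down and removes the images so the next run pulls fresh ones, which is exactly the two-step, full re-download cost the requested update flag is meant to avoid.
```
# Current-workaround sketch for the docker_service image-update request above.
- name: Tear the service down and remove its images
  docker_service:
    project_src: /srv/app
    state: absent
    remove_images: all

- name: Bring the service back up, pulling the now-missing images
  docker_service:
    project_src: /srv/app
    state: present
```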
A future version of cryptography will drop support for Python 2.6\n DeprecationWarning\nTraceback (most recent call last):\n File \""/tmp/ansible_XHZdbt/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_XHZdbt/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 96, in open\n File \""/opt/netprod/lib/env/netprod/lib/python2.6/site-packages/paramiko/client.py\"", line 366, in connect\n raise BadHostKeyException(hostname, server_key, our_server_key)\nparamiko.ssh_exception.BadHostKeyException: ('router1', , )\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",True,"Networking, more userfriendly error when raising BadHostKeyException - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - ios_config ##### ANSIBLE VERSION ``` /opt/netprod/lib/env/netprod/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.2.0.0 config file = /home/paog01/ipops/ch-net-roles/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY A network device has changed the SSH key. ##### STEPS TO REPRODUCE Login to device to store the public key locally, then regenerate the ssh key on the device ##### EXPECTED RESULTS The task fails as expected. However the error is quite verbose and isn't caught. This error should be caught and reported back as a public key error instead of the ""crash"" below. ##### ACTUAL RESULTS ``` fatal: [router1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/opt/netprod/lib/env/netprod/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. 
A future version of cryptography will drop support for Python 2.6\n DeprecationWarning\nTraceback (most recent call last):\n File \""/tmp/ansible_XHZdbt/ansible_module_ios_command.py\"", line 237, in \n main()\n File \""/tmp/ansible_XHZdbt/ansible_module_ios_command.py\"", line 200, in main\n runner.add_command(**cmd)\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 147, in add_command\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/network.py\"", line 117, in cli\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/network.py\"", line 148, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 226, in connect\n File \""/tmp/ansible_XHZdbt/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 96, in open\n File \""/opt/netprod/lib/env/netprod/lib/python2.6/site-packages/paramiko/client.py\"", line 366, in connect\n raise BadHostKeyException(hostname, server_key, our_server_key)\nparamiko.ssh_exception.BadHostKeyException: ('router1', , )\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",1,networking more userfriendly error when raising badhostkeyexception issue type feature idea component name ios config ansible version opt netprod lib env netprod lib site packages cryptography init py deprecationwarning python is no longer supported by the python core team please upgrade your python a future version of cryptography will drop support for python deprecationwarning ansible config file home ipops ch net roles ansible cfg configured module search path default w o overrides configuration n a os environment n a summary a network device has changed the ssh key steps to reproduce login to device to store the public key locally then regenerate the ssh key on the device expected results the task fails as expected however the error is quite verbose and isn t caught this error should be caught and reported back as a public key error instead of the crash below actual results fatal failed changed false failed true module stderr opt netprod lib env netprod lib site packages cryptography init py deprecationwarning python is no longer supported by the python core team please upgrade your python a future version of cryptography will drop support for python n deprecationwarning ntraceback most recent call last n file tmp ansible xhzdbt ansible module ios command py line in n main n file tmp ansible xhzdbt ansible module ios command py line in main n runner add command cmd n file tmp ansible xhzdbt ansible modlib zip ansible module utils netcli py line in add command n file tmp ansible xhzdbt ansible modlib zip ansible module utils network py line in cli n file tmp ansible xhzdbt ansible modlib zip ansible module utils network py line in connect n file tmp ansible xhzdbt ansible modlib zip ansible module utils ios py line in connect n file tmp ansible xhzdbt ansible modlib zip ansible module utils shell py line in connect n file tmp ansible xhzdbt ansible modlib zip ansible module utils shell py line in open n file opt netprod lib env netprod lib site packages paramiko client py line in connect n raise badhostkeyexception hostname server key our server key nparamiko ssh exception badhostkeyexception n module stdout msg module failure ,1 988,4756341225.0,IssuesEvent,2016-10-24 
13:45:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Fix minor typos in modules,affects_2.3 docs_report networking waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_template The ios_template module has the following text: ``` Deprecated in 2.2. Use eos_config instead ``` I'm guessing this wasn't an attempt to sell more Arista switches :-)",True,"Fix minor typos in modules - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_template The ios_template module has the following text: ``` Deprecated in 2.2. Use eos_config instead ``` I'm guessing this wasn't an attempt to sell more Arista switches :-)",1,fix minor typos in modules issue type documentation report component name ios template the ios template module has the following text deprecated in use eos config instead i m guessing this wasn t an attempt to sell more arista switches ,1 1731,6574838367.0,IssuesEvent,2017-09-11 14:14:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Network modules should provide the actual device CLI interaction logging (raw output),affects_2.3 feature_idea networking waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME network, netcli, ios_command.py, ios_config.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 15ed88547f) last updated 2016/10/25 15:34:39 (GMT -400) lib/ansible/modules/core: (detached HEAD 124bb92416) last updated 2016/10/25 15:34:40 (GMT -400) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 15:34:40 (GMT -400) ``` ##### SUMMARY Currently the network modules do not return any device debugging or raw device output. This is a concern on one hand because network folk are reluctant to run tools on their network without knowing exactly what they are doing on their gear. This is also a challenge when troubleshooting why commands are not working properly with devices that may have multi-level prompts, etc. Being able to see the actual output from the device is key to this. Yes you can later run commands manually to figure it out, but ansible should be able to provide more informative errors. I have started down the path to add this feature to my branch, but as the path to do so is not clear and may require some rework across a number of modules, I would solicit input from the team about how they might like to see this approached. Currently the execute() functions for example only returns the output. Returning a dictionary would offer more flexibility, but there are a lot of places that would need to be changed to support that. There are also two code paths that need to handle this extra data (The normal and the Exceptions path..) I did a proof of concept by adding this to a stderr value when exceptions occur and then had to follow the path from ShellError to NetworkError for example to get this passed back. As execute() currently only returns one output, I simply tacked the debug onto that for a quick test. A couple of examples of the output from my proof are below for added context. Again, I welcome feedback on this feature and can work on it if there is some consensus. Likely I would expect an added parameter in the playbook that turns this feature on, but also perhaps when errors occur the output from the device should be available for troubleshooting. Whether it is called 'stderr' or 'raw' or 'output' etc is also an open question. 
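To make the gap described in the raw-output feature request above concrete, a small sketch of what is visible from the playbook side today: registering the ios_command result and printing it only exposes the parsed command output, not the underlying CLI session the request asks for. The `cli` provider variable is assumed to be defined as in the other reports.
```
# Sketch: what a playbook can surface today (no raw device interaction log).
- name: Run a show command and capture the module result
  ios_command:
    commands:
      - show version
    provider: "{{ cli }}"
  register: show_output

- name: Print the parsed output that the module returns
  debug:
    var: show_output.stdout_lines
```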
##### EXPECTED RESULTS ``` ""msg"": ""timeout waiting for next prompt from remote device"", ""stderr"": ""Ciscodev# copy running-config flash:test\r\nDestination filename [test]? "", ""stderr_lines"": [ [ ""ciscodev# copy running-config flash:test\r"", ""Destination filename [test]?"" ] ] ``` ",True,"Network modules should provide the actual device CLI interaction logging (raw output) - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME network, netcli, ios_command.py, ios_config.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 15ed88547f) last updated 2016/10/25 15:34:39 (GMT -400) lib/ansible/modules/core: (detached HEAD 124bb92416) last updated 2016/10/25 15:34:40 (GMT -400) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 15:34:40 (GMT -400) ``` ##### SUMMARY Currently the network modules do not return any device debugging or raw device output. This is a concern on one hand because network folk are reluctant to run tools on their network without knowing exactly what they are doing on their gear. This is also a challenge when troubleshooting why commands are not working properly with devices that may have multi-level prompts, etc. Being able to see the actual output from the device is key to this. Yes you can later run commands manually to figure it out, but ansible should be able to provide more informative errors. I have started down the path to add this feature to my branch, but as the path to do so is not clear and may require some rework across a number of modules, I would solicit input from the team about how they might like to see this approached. Currently the execute() functions for example only returns the output. Returning a dictionary would offer more flexibility, but there are a lot of places that would need to be changed to support that. There are also two code paths that need to handle this extra data (The normal and the Exceptions path..) I did a proof of concept by adding this to a stderr value when exceptions occur and then had to follow the path from ShellError to NetworkError for example to get this passed back. As execute() currently only returns one output, I simply tacked the debug onto that for a quick test. A couple of examples of the output from my proof are below for added context. Again, I welcome feedback on this feature and can work on it if there is some consensus. Likely I would expect an added parameter in the playbook that turns this feature on, but also perhaps when errors occur the output from the device should be available for troubleshooting. Whether it is called 'stderr' or 'raw' or 'output' etc is also an open question. ##### EXPECTED RESULTS ``` ""msg"": ""timeout waiting for next prompt from remote device"", ""stderr"": ""Ciscodev# copy running-config flash:test\r\nDestination filename [test]? 
"", ""stderr_lines"": [ [ ""ciscodev# copy running-config flash:test\r"", ""Destination filename [test]?"" ] ] ``` ",1,network modules should provide the actual device cli interaction logging raw output issue type feature idea component name network netcli ios command py ios config py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt summary currently the network modules do not return any device debugging or raw device output this is a concern on one hand because network folk are reluctant to run tools on their network without knowing exactly what they are doing on their gear this is also a challenge when troubleshooting why commands are not working properly with devices that may have multi level prompts etc being able to see the actual output from the device is key to this yes you can later run commands manually to figure it out but ansible should be able to provide more informative errors i have started down the path to add this feature to my branch but as the path to do so is not clear and may require some rework across a number of modules i would solicit input from the team about how they might like to see this approached currently the execute functions for example only returns the output returning a dictionary would offer more flexibility but there are a lot of places that would need to be changed to support that there are also two code paths that need to handle this extra data the normal and the exceptions path i did a proof of concept by adding this to a stderr value when exceptions occur and then had to follow the path from shellerror to networkerror for example to get this passed back as execute currently only returns one output i simply tacked the debug onto that for a quick test a couple of examples of the output from my proof are below for added context again i welcome feedback on this feature and can work on it if there is some consensus likely i would expect an added parameter in the playbook that turns this feature on but also perhaps when errors occur the output from the device should be available for troubleshooting whether it is called stderr or raw or output etc is also an open question expected results msg timeout waiting for next prompt from remote device stderr ciscodev copy running config flash test r ndestination filename stderr lines ciscodev copy running config flash test r destination filename ,1 1713,6574460288.0,IssuesEvent,2017-09-11 12:58:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image returns changed even if the image is in the latest version,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_image` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/albertom/ciao/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY Pulling a docker image with `force=yes` always returns `changed` even if the image is in its latest version. ##### STEPS TO REPRODUCE The following task always return changed even when the image is in its latest version. 
``` tasks: - name: Download busybox image docker_image: name=busybox state=present force=yes ``` But running `docker pull` from shell you could tell if the image was updated or not ``` # docker pull busybox Using default tag: latest latest: Pulling from library/busybox 56bec22e3559: Pull complete Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 Status: Downloaded newer image for busybox:latest # docker pull busybox Using default tag: latest latest: Pulling from library/busybox Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 Status: Image is up to date for busybox:latest ``` ##### EXPECTED RESULTS It should return unchanged if the image was not updated. ##### ACTUAL RESULTS ``` TASK [Download Download busybox image] ********************************* Thursday 03 November 2016 11:03:34 -0500 (0:00:00.567) 0:00:00.580 ***** changed: [wonderwoman.intel.com] => {""actions"": [""Pulled image busybox:latest""], ""changed"": true, ""image"": {""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""sh""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""4a74292706a0"", ""Image"": ""sha256:1679bae2167496818312013654f5c66a16e185d0a0f6b762b53c8558014457c6"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""8bb318a3b4672c53a1747991c95fff3306eea13ec308740ebe0c81b56ece530f"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""sh\""]""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""4a74292706a0"", ""Image"": ""sha256:1679bae2167496818312013654f5c66a16e185d0a0f6b762b53c8558014457c6"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-10-07T21:03:58.469866982Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": null, ""Name"": ""aufs""}, ""Id"": ""sha256:e02e811dd08fd49e7f6032625495118e63f597eb150403d02e3238af1df240ba"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [""busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912""], ""RepoTags"": [""busybox:latest""], ""RootFS"": {""Layers"": [""sha256:e88b3f82283bc59d5e0df427c824e9f95557e661fcb0ea15fb0fb6f97760f9d9""], ""Type"": ""layers""}, ""Size"": 1093484, ""VirtualSize"": 1093484}} ``` ",True,"docker_image returns changed even if the image is in the latest version - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_image` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/albertom/ciao/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY Pulling a docker image with `force=yes` always returns `changed` even if the image is in its latest version. ##### STEPS TO REPRODUCE The following task always return changed even when the image is in its latest version. 
``` tasks: - name: Download busybox image docker_image: name=busybox state=present force=yes ``` But running `docker pull` from shell you could tell if the image was updated or not ``` # docker pull busybox Using default tag: latest latest: Pulling from library/busybox 56bec22e3559: Pull complete Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 Status: Downloaded newer image for busybox:latest # docker pull busybox Using default tag: latest latest: Pulling from library/busybox Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 Status: Image is up to date for busybox:latest ``` ##### EXPECTED RESULTS It should return unchanged if the image was not updated. ##### ACTUAL RESULTS ``` TASK [Download Download busybox image] ********************************* Thursday 03 November 2016 11:03:34 -0500 (0:00:00.567) 0:00:00.580 ***** changed: [wonderwoman.intel.com] => {""actions"": [""Pulled image busybox:latest""], ""changed"": true, ""image"": {""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""sh""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""4a74292706a0"", ""Image"": ""sha256:1679bae2167496818312013654f5c66a16e185d0a0f6b762b53c8558014457c6"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""8bb318a3b4672c53a1747991c95fff3306eea13ec308740ebe0c81b56ece530f"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""sh\""]""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""4a74292706a0"", ""Image"": ""sha256:1679bae2167496818312013654f5c66a16e185d0a0f6b762b53c8558014457c6"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-10-07T21:03:58.469866982Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": null, ""Name"": ""aufs""}, ""Id"": ""sha256:e02e811dd08fd49e7f6032625495118e63f597eb150403d02e3238af1df240ba"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [""busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912""], ""RepoTags"": [""busybox:latest""], ""RootFS"": {""Layers"": [""sha256:e88b3f82283bc59d5e0df427c824e9f95557e661fcb0ea15fb0fb6f97760f9d9""], ""Type"": ""layers""}, ""Size"": 1093484, ""VirtualSize"": 1093484}} ``` ",1,docker image returns changed even if the image is in the latest version issue type bug report component name docker image ansible version ansible config file home albertom ciao ansible cfg configured module search path default w o overrides configuration none os environment n a summary pulling a docker image with force yes always returns changed even if the image is in its latest version steps to reproduce the following task always return changed even when the image is in its latest version tasks name download busybox image docker image name busybox state present force yes but running docker pull from shell you could tell if the image was updated or not docker pull busybox using default tag latest latest pulling from library busybox pull 
complete digest status downloaded newer image for busybox latest docker pull busybox using default tag latest latest pulling from library busybox digest status image is up to date for busybox latest expected results it should return unchanged if the image was not updated actual results task thursday november changed actions changed true image architecture author comment config attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir container containerconfig attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir created dockerversion graphdriver data null name aufs id os linux parent repodigests repotags rootfs layers type layers size virtualsize ,1 893,4553931704.0,IssuesEvent,2016-09-13 07:35:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Docker: 'reloaded' state not recreating container,affects_2.0 bug_report cloud docker P3 waiting_on_maintainer,"Hi, according to the docs it should recreate the container if parameters changed: > ""reloaded"" asserts that all matching containers are running and restarts any that have any images or configuration out of date. But that doesn't seem to work. Sometimes it doesn't reload even though parameters changed: ``` - hosts: 127.0.0.1 connection: local tasks: - docker: image: ubuntu name: test state: reloaded command: ""nc -l -k 2342"" ``` Running this, creates a container. Running it again, doesn't touch it. So far so good. Now I added 'restart_policy: always', ran the play again and it didn't recreate the container. I'm running the latest devel branch (just pulled, including the sub repos).",True,"Docker: 'reloaded' state not recreating container - Hi, according to the docs it should recreate the container if parameters changed: > ""reloaded"" asserts that all matching containers are running and restarts any that have any images or configuration out of date. But that doesn't seem to work. Sometimes it doesn't reload even though parameters changed: ``` - hosts: 127.0.0.1 connection: local tasks: - docker: image: ubuntu name: test state: reloaded command: ""nc -l -k 2342"" ``` Running this, creates a container. Running it again, doesn't touch it. So far so good. Now I added 'restart_policy: always', ran the play again and it didn't recreate the container. 
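For the docker_image report above, a hedged note: without `force`, `state=present` is a no-op when the tag already exists locally, so it stays idempotent; `force=yes` is what triggers the unconditional pull and, on the affected version, the unconditional `changed`. A minimal sketch of the idempotent form:
```
# Sketch: presence check only; no pull (and no spurious change) if the tag exists.
- name: Ensure the busybox image is present
  docker_image:
    name: busybox
    state: present
```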
I'm running the latest devel branch (just pulled, including the sub repos).",1,docker reloaded state not recreating container hi according to the docs it should recreate the container if parameters changed reloaded asserts that all matching containers are running and restarts any that have any images or configuration out of date but that doesn t seem to work sometimes it doesn t reload even though parameters changed hosts connection local tasks docker image ubuntu name test state reloaded command nc l k running this creates a container running it again doesn t touch it so far so good now i added restart policy always ran the play again and it didn t recreate the container i m running the latest devel branch just pulled including the sub repos ,1 1013,4793975471.0,IssuesEvent,2016-10-31 19:42:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,synchronize fails with become module,affects_2.1 bug_report waiting_on_maintainer,"#### ISSUE TYPE Bug Report #### COMPONENT NAME synchronize ##### ANSIBLE VERSION 2.1.0.0 #### CONFIGURATION Default #### OS / ENVIRONMENT RHEL 6.7 #### SUMMARY I am trying to use the synchronize module to copy file from Ansible node to remote node. I want these files to exist as UserB on the remote nodes but I do not have access to UserB directly. Instead UserA has sudo privileges to switch to UserB. So im logging in as UserA. My environment file says: ansible_ssh_user=UserA ansible_ssh_pass= ansible_become_method=sudo ansible_become_user=UserB ansible_become_pass= My task is : ``` - name: Copy and unarchive webapps node. synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/msdp/ca/app checksum=yes become: yes ``` But when I run the playbook, I get an ERROR: On the remote node, only UserB can write under : /opt/msdp/ca/app #### STEPS TO REPRODUCE #### EXPECTED RESULTS I should have my files copied to remote user as UserB #### ACTUAL RESULTS fatal: [5.232.57.247]: FAILED! => {""changed"": false, ""cmd"": ""/usr/bin/rsync --delay-updates -F --compress --checksum --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o ControlMaster=auto -o ControlPersist=60s' --rsync-path=\""sudo rsync\"" --out-format='<>%i %n%L' \""/home/ansible/templates/app/Sprint6/webapps\"" \""UserA@5.232.57.247:/opt/msdp/ca/app\"""", ""failed"": true, ""msg"": ""sudo: sorry, you must have a tty to run sudo\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n"", ""rc"": 12} ",True,"synchronize fails with become module - #### ISSUE TYPE Bug Report #### COMPONENT NAME synchronize ##### ANSIBLE VERSION 2.1.0.0 #### CONFIGURATION Default #### OS / ENVIRONMENT RHEL 6.7 #### SUMMARY I am trying to use the synchronize module to copy file from Ansible node to remote node. I want these files to exist as UserB on the remote nodes but I do not have access to UserB directly. Instead UserA has sudo privileges to switch to UserB. So im logging in as UserA. My environment file says: ansible_ssh_user=UserA ansible_ssh_pass= ansible_become_method=sudo ansible_become_user=UserB ansible_become_pass= My task is : ``` - name: Copy and unarchive webapps node. 
synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/msdp/ca/app checksum=yes become: yes ``` But when I run the playbook, I get an ERROR: On the remote node, only UserB can write under : /opt/msdp/ca/app #### STEPS TO REPRODUCE #### EXPECTED RESULTS I should have my files copied to remote user as UserB #### ACTUAL RESULTS fatal: [5.232.57.247]: FAILED! => {""changed"": false, ""cmd"": ""/usr/bin/rsync --delay-updates -F --compress --checksum --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o ControlMaster=auto -o ControlPersist=60s' --rsync-path=\""sudo rsync\"" --out-format='<>%i %n%L' \""/home/ansible/templates/app/Sprint6/webapps\"" \""UserA@5.232.57.247:/opt/msdp/ca/app\"""", ""failed"": true, ""msg"": ""sudo: sorry, you must have a tty to run sudo\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n"", ""rc"": 12} ",1,synchronize fails with become module issue type bug report component name synchronize ansible version configuration default os environment rhel summary i am trying to use the synchronize module to copy file from ansible node to remote node i want these files to exist as userb on the remote nodes but i do not have access to userb directly instead usera has sudo privileges to switch to userb so im logging in as usera my environment file says ansible ssh user usera ansible ssh pass ansible become method sudo ansible become user userb ansible become pass my task is name copy and unarchive webapps node synchronize src home ansible templates app webapps dest opt msdp ca app checksum yes become yes but when i run the playbook i get an error on the remote node only userb can write under opt msdp ca app steps to reproduce expected results i should have my files copied to remote user as userb actual results fatal failed changed false cmd usr bin rsync delay updates f compress checksum archive rsh ssh s none o stricthostkeychecking no o controlmaster auto o controlpersist rsync path sudo rsync out format i n l home ansible templates app webapps usera opt msdp ca app failed true msg sudo sorry you must have a tty to run sudo nrsync connection unexpectedly closed bytes received so far nrsync error error in rsync protocol data stream code at io c n rc ,1 954,4699525802.0,IssuesEvent,2016-10-12 15:54:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_asg sets desired_capacity=0 when min_size=0 - terminates existing instances,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT AmazonLinux 4.4.19-29.55.amzn1.x86_64 amzn-ami-hvm-2016.03.0.x86_64-gp2 (ami-31328842) ##### SUMMARY Setting min_size to zero also sets desired_capacity to zero ##### STEPS TO REPRODUCE Playbook creates ASG with the following values: ``` ec2_asg: ... min_size: 0 max_size: 3 desired_capacity: 1 ... ``` ##### EXPECTED RESULTS ASG has min_size=0 but desired_capacity remains as set and no instances are terminated unnecessarily. 
##### ACTUAL RESULTS ASG has min_size=0 and desired_capacity=0, existing instances are terminated ",True,"ec2_asg sets desired_capacity=0 when min_size=0 - terminates existing instances - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT AmazonLinux 4.4.19-29.55.amzn1.x86_64 amzn-ami-hvm-2016.03.0.x86_64-gp2 (ami-31328842) ##### SUMMARY Setting min_size to zero also sets desired_capacity to zero ##### STEPS TO REPRODUCE Playbook creates ASG with the following values: ``` ec2_asg: ... min_size: 0 max_size: 3 desired_capacity: 1 ... ``` ##### EXPECTED RESULTS ASG has min_size=0 but desired_capacity remains as set and no instances are terminated unnecessarily. ##### ACTUAL RESULTS ASG has min_size=0 and desired_capacity=0, existing instances are terminated ",1, asg sets desired capacity when min size terminates existing instances issue type bug report component name asg ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment amazonlinux amzn ami hvm ami summary setting min size to zero also sets desired capacity to zero steps to reproduce playbook creates asg with the following values asg min size max size desired capacity expected results asg has min size but desired capacity remains as set and no instances are terminated unnecessarily actual results asg has min size and desired capacity existing instances are terminated ,1 849,4506717858.0,IssuesEvent,2016-09-02 05:54:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed, DigitalOcean key generation,bug_report cloud digital_ocean feature_idea P3 waiting_on_maintainer,"##### Issue Type: Bug Report ##### Component Name: digital_ocean_sshkey ##### Ansible Version: ansible 1.8.2 configured module search path = None ##### Environment: Mac OS X ##### Summary: If an SSH key exists in digitalocean, trying to ""state=present"" the same SSH public key with another name fails ##### Steps To Reproduce: - Create an SSH key on the WebUI, and name it key1 - run this ansible task: ``` local_action: digital_ocean_sshkey state=present name=key2 ssh_pub_key={{digitalocean_ssh_key_file}} client_id={{digitalocean_client_id}} api_key={{digitalocean_api_key}} ``` ##### Expected Results: One of: - An explanatory error message - the key being renamed - the id of the already existing key being returned (possibly with the name key1) ##### Actual Results: ``` failed: [127.0.0.1 -> 127.0.0.1] => {""failed"": true} msg: SSH Key failed to be created ```",True," DigitalOcean key generation - ##### Issue Type: Bug Report ##### Component Name: digital_ocean_sshkey ##### Ansible Version: ansible 1.8.2 configured module search path = None ##### Environment: Mac OS X ##### Summary: If an SSH key exists in digitalocean, trying to ""state=present"" the same SSH public key with another name fails ##### Steps To Reproduce: - Create an SSH key on the WebUI, and name it key1 - run this ansible task: ``` local_action: digital_ocean_sshkey state=present name=key2 ssh_pub_key={{digitalocean_ssh_key_file}} client_id={{digitalocean_client_id}} api_key={{digitalocean_api_key}} ``` ##### Expected Results: One of: - An explanatory error message - the key being renamed - the id of the already existing key being returned (possibly with the name key1) ##### Actual Results: ``` failed: [127.0.0.1 -> 127.0.0.1] => {""failed"": true} msg: SSH Key failed 
to be created ```",1, digitalocean key generation issue type bug report component name digital ocean sshkey ansible version ansible configured module search path none environment mac os x summary if an ssh key exists in digitalocean trying to state present the same ssh public key with another name fails steps to reproduce create an ssh key on the webui and name it run this ansible task local action digital ocean sshkey state present name ssh pub key digitalocean ssh key file client id digitalocean client id api key digitalocean api key expected results one of an explanatory error message the key being renamed the id of the already existing key being returned possibly with the name actual results failed failed true msg ssh key failed to be created ,1 1300,5541906229.0,IssuesEvent,2017-03-22 13:58:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Exception while running ios_command which times out after correctly executing a command,affects_2.1 bug_report networking P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/lib/python2.7/dist-packages/ansible'] ``` ##### CONFIGURATION inventory = ./hosts roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles fact_caching = redis fact_caching_timeout = 86400 ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **IOSv 15.4** (20140730:011659) ##### SUMMARY ios_command executes the command correctly and then runs into an exception ending as a ""socket.timeout"". The same crash happens even when: - password is removed from provider to try to influence Ansible to use the SSH key instead - SSH key is removed from provider to try to influence Ansible to use the password instead - other types of IOS commands are passed ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [iosv] 192.168.137.254 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/iosv/connections.yml ``` connections: ... ssh: host: ""{{ inventory_hostname }}"" port: 22 username: admin password: Cisco2016$ authorize: yes auth_pass: Cisco2016$ ssh_keyfile: ""~/.ssh/id_rsa"" timeout: 10 ``` **Role**: ios_pull_config ``` - include_vars: ""{{ role_path }}/../../group_vars/{{ hostvars[inventory_hostname].group_names[0] }}/connections.yml"" - name: Fetching the {{ config }} from the remote node ios_command: provider: ""{{ connections.ssh }}"" commands: - ""show {{ config }}"" register: show ``` **Playbook**: ``` - name: Pull IOS/IOSv/IOSv-L2 startup and running configs hosts: iosv roles: - { role: ios_pull_config, config: startup-config } - { role: ios_pull_config, config: running-config } ``` ##### EXPECTED RESULTS The command should obviously not crash. ##### ACTUAL RESULTS ``` PLAYBOOK: ios_pull_config.yml ************************************************** 1 plays in ios_pull_config.yml PLAY [Pull IOS/IOSv/IOSv-L2 startup and running configs] *********************** TASK [print_variables : Print all variables for each remote device] ************ task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/print_variables/tasks/main.yml:27 ... 
TASK [ios_pull_config : include_vars] ****************************************** task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/ios_pull_config/tasks/main.yml:36 ok: [192.168.137.254] => {""ansible_facts"": {""ansible_connection"": ""{{ connections.compute }}"", ""connections"": {""compute"": ""local"", ""os_family"": null, ""os_release"": null, ..... .... TASK [ios_pull_config : Fetching the startup-config from the remote node] ****** task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/ios_pull_config/tasks/main.yml:38 ... ok: [192.168.137.254] => {""changed"": false, ""invocation"": {""module_args"": {""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""commands"": [""show startup-config""], ""host"": ""192.168.137.254"", ""interval"": 1, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": ""{'username': 'admin', 'authorize': True, 'ssh_keyfile': '~/.ssh/id_rsa', 'auth_pass': '********', 'key_type': 'rsa', 'host': '192.168.137.254', 'version': 2, 'auth_retries': 3, 'timeout': 10, 'password': '********', 'port': 22, 'transport': 'cli'}"", ""retries"": 10, ""ssh_keyfile"": ""/root/.ssh/id_rsa"", ""timeout"": 10, ""username"": ""admin"", ""waitfor"": null}, ""module_name"": ""ios_command""}, ""stdout"": [""Using 6464 out of 262144 bytes\n!\n! Last configuration change at 13:54:49 UTC Fri Jul 1 2016 by admin\n!\nversion 15.4\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nservice password-encryption\n!\nhostname vIOS_1\n!\nboot-start-marker\nboot-end-marker\n!\n!\nenable secret 5 $1$HNyP$Gf3QFB4kSxl9..lFPVy4P/ .... **An exception occurred during task execution**. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_vNRL0L/ansible_module_ios_command.py"", line 169, in main() File ""/tmp/ansible_vNRL0L/ansible_module_ios_command.py"", line 144, in main response = module.execute(commands) File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 151, in execute File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 134, in connect File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 87, in connect File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/shell.py"", line 108, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 292, in connect retry_on_signal(lambda: sock.connect(addr)) File ""/usr/local/lib/python2.7/dist-packages/paramiko/util.py"", line 270, in retry_on_signal return function() File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 292, in retry_on_signal(lambda: sock.connect(addr)) File ""/usr/lib/python2.7/socket.py"", line 228, in meth return getattr(self._sock,name)(*args) **socket.timeout**: timed out ... 
``` **Whole log** [ios_pull_config.log.txt](https://github.com/ansible/ansible-modules-core/files/343757/ios_pull_config.log.txt) ",True,"Exception while running ios_command which times out after correctly executing a command - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/lib/python2.7/dist-packages/ansible'] ``` ##### CONFIGURATION inventory = ./hosts roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles fact_caching = redis fact_caching_timeout = 86400 ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **IOSv 15.4** (20140730:011659) ##### SUMMARY ios_command executes the command correctly and then runs into an exception ending as a ""socket.timeout"". The same crash happens even when: - password is removed from provider to try to influence Ansible to use the SSH key instead - SSH key is removed from provider to try to influence Ansible to use the password instead - other types of IOS commands are passed ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [iosv] 192.168.137.254 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/iosv/connections.yml ``` connections: ... ssh: host: ""{{ inventory_hostname }}"" port: 22 username: admin password: Cisco2016$ authorize: yes auth_pass: Cisco2016$ ssh_keyfile: ""~/.ssh/id_rsa"" timeout: 10 ``` **Role**: ios_pull_config ``` - include_vars: ""{{ role_path }}/../../group_vars/{{ hostvars[inventory_hostname].group_names[0] }}/connections.yml"" - name: Fetching the {{ config }} from the remote node ios_command: provider: ""{{ connections.ssh }}"" commands: - ""show {{ config }}"" register: show ``` **Playbook**: ``` - name: Pull IOS/IOSv/IOSv-L2 startup and running configs hosts: iosv roles: - { role: ios_pull_config, config: startup-config } - { role: ios_pull_config, config: running-config } ``` ##### EXPECTED RESULTS The command should obviously not crash. ##### ACTUAL RESULTS ``` PLAYBOOK: ios_pull_config.yml ************************************************** 1 plays in ios_pull_config.yml PLAY [Pull IOS/IOSv/IOSv-L2 startup and running configs] *********************** TASK [print_variables : Print all variables for each remote device] ************ task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/print_variables/tasks/main.yml:27 ... TASK [ios_pull_config : include_vars] ****************************************** task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/ios_pull_config/tasks/main.yml:36 ok: [192.168.137.254] => {""ansible_facts"": {""ansible_connection"": ""{{ connections.compute }}"", ""connections"": {""compute"": ""local"", ""os_family"": null, ""os_release"": null, ..... .... TASK [ios_pull_config : Fetching the startup-config from the remote node] ****** task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles/ios_pull_config/tasks/main.yml:38 ... 
ok: [192.168.137.254] => {""changed"": false, ""invocation"": {""module_args"": {""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""commands"": [""show startup-config""], ""host"": ""192.168.137.254"", ""interval"": 1, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": ""{'username': 'admin', 'authorize': True, 'ssh_keyfile': '~/.ssh/id_rsa', 'auth_pass': '********', 'key_type': 'rsa', 'host': '192.168.137.254', 'version': 2, 'auth_retries': 3, 'timeout': 10, 'password': '********', 'port': 22, 'transport': 'cli'}"", ""retries"": 10, ""ssh_keyfile"": ""/root/.ssh/id_rsa"", ""timeout"": 10, ""username"": ""admin"", ""waitfor"": null}, ""module_name"": ""ios_command""}, ""stdout"": [""Using 6464 out of 262144 bytes\n!\n! Last configuration change at 13:54:49 UTC Fri Jul 1 2016 by admin\n!\nversion 15.4\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nservice password-encryption\n!\nhostname vIOS_1\n!\nboot-start-marker\nboot-end-marker\n!\n!\nenable secret 5 $1$HNyP$Gf3QFB4kSxl9..lFPVy4P/ .... **An exception occurred during task execution**. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_vNRL0L/ansible_module_ios_command.py"", line 169, in main() File ""/tmp/ansible_vNRL0L/ansible_module_ios_command.py"", line 144, in main response = module.execute(commands) File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 151, in execute File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 134, in connect File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/ios.py"", line 87, in connect File ""/tmp/ansible_vNRL0L/ansible_modlib.zip/ansible/module_utils/shell.py"", line 108, in open File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 292, in connect retry_on_signal(lambda: sock.connect(addr)) File ""/usr/local/lib/python2.7/dist-packages/paramiko/util.py"", line 270, in retry_on_signal return function() File ""/usr/local/lib/python2.7/dist-packages/paramiko/client.py"", line 292, in retry_on_signal(lambda: sock.connect(addr)) File ""/usr/lib/python2.7/socket.py"", line 228, in meth return getattr(self._sock,name)(*args) **socket.timeout**: timed out ... 
``` **Whole log** [ios_pull_config.log.txt](https://github.com/ansible/ansible-modules-core/files/343757/ios_pull_config.log.txt) ",1,exception while running ios command which times out after correctly executing a command issue type bug report component name ios command ansible version ansible config file etc ansible ansible cfg configured module search path configuration inventory hosts roles path home actionmystique program files ubuntu ansible git ansible roles fact caching redis fact caching timeout os environment host ubuntu target iosv summary ios command executes the command correctly and then runs into an exception ending as a socket timeout the same crash happens even when password is removed from provider to try to influence ansible to use the ssh key instead ssh key is removed from provider to try to influence ansible to use the password instead other types of ios commands are passed steps to reproduce inventory hosts structure passed as provider connections ssh defined in group vars iosv connections yml connections ssh host inventory hostname port username admin password authorize yes auth pass ssh keyfile ssh id rsa timeout role ios pull config include vars role path group vars hostvars group names connections yml name fetching the config from the remote node ios command provider connections ssh commands show config register show playbook name pull ios iosv iosv startup and running configs hosts iosv roles role ios pull config config startup config role ios pull config config running config expected results the command should obviously not crash actual results playbook ios pull config yml plays in ios pull config yml play task task path home actionmystique program files ubuntu ansible git ansible roles print variables tasks main yml task task path home actionmystique program files ubuntu ansible git ansible roles ios pull config tasks main yml ok ansible facts ansible connection connections compute connections compute local os family null os release null task task path home actionmystique program files ubuntu ansible git ansible roles ios pull config tasks main yml ok changed false invocation module args auth pass value specified in no log parameter authorize true commands host interval password value specified in no log parameter port provider username admin authorize true ssh keyfile ssh id rsa auth pass key type rsa host version auth retries timeout password port transport cli retries ssh keyfile root ssh id rsa timeout username admin waitfor null module name ios command stdout using out of bytes n n last configuration change at utc fri jul by admin n nversion nservice timestamps debug datetime msec nservice timestamps log datetime msec nservice password encryption n nhostname vios n nboot start marker nboot end marker n n nenable secret hnyp an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios command py line in main file tmp ansible ansible module ios command py line in main response module execute commands file tmp ansible ansible modlib zip ansible module utils ios py line in execute file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in open file usr local lib dist packages paramiko client py line in connect retry on signal lambda sock connect addr file usr local lib dist packages paramiko util py line in retry on 
signal return function file usr local lib dist packages paramiko client py line in retry on signal lambda sock connect addr file usr lib socket py line in meth return getattr self sock name args socket timeout timed out whole log ,1 1718,6574482976.0,IssuesEvent,2017-09-11 13:03:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pip module doesn't use proper pip executable from virtualenv,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pip_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /code/ansible.cfg configured module search path = Default w/o overrides ``` Could affect 2.1 also ##### OS / ENVIRONMENT CentOS 7 ##### SUMMARY pip module uses `/bin/pip2` instead of the one from the virtualenv when virtualenv is specified and is newly created. This result for example in that it unable to upgrade pip inside the virtualenv with the error ``` OSError: [Errno 13] Permission denied: '/usr/bin/pip' ``` because it tries to unlink the system pip. ##### STEPS TO REPRODUCE * Use target host with the old pip installed. * Try to upgrade pip to the latest state via pip module providing some new virtualenv path using unprivileged user having full access to the target virtualenv. * Notice system pip executable unlink error. ``` - name: ensure pip is latest version in venv pip: name: pip virtualenv: '{{ projects_path }}/venv' state: latest become_user: venvuser ``` ##### EXPECTED RESULTS pip module should use pip executable from the virtualenv provided via the module arg. ##### ACTUAL RESULTS ``` fatal: [hostname]: FAILED! => { ""changed"": false, ""cmd"": ""/bin/pip2 install -U pip"", ""failed"": true, ""invocation"": { ""module_args"": { ""chdir"": null, ""editable"": true, ""executable"": null, ""extra_args"": null, ""name"": [ ""pip"" ], ""requirements"": null, ""state"": ""latest"", ""umask"": null, ""use_mirrors"": true, ""version"": null, ""virtualenv"": ""/srv/www/hostname/venv"", ""virtualenv_command"": ""virtualenv"", ""virtualenv_python"": null, ""virtualenv_site_packages"": false }, ""module_name"": ""pip"" }, ""msg"": ""stdout: New python executable in /srv/www/hostname/venv/bin/python\nInstalling Setuptools..............................................................................................................................................................................................................................done.\nInstalling Pip.....................................................................................................................................................................................................................................................................................................................................done.\nCollecting pip\n Downloading pip-9.0.0-py2.py3-none-any.whl (1.3MB)\nInstalling collected packages: pip\n Found existing installation: pip 7.1.0\n Uninstalling pip-7.1.0:\n\n:stderr: You are using pip version 7.1.0, however version 9.0.0 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\nException:\nTraceback (most recent call last):\n File \""/usr/lib/python2.7/site-packages/pip/basecommand.py\"", line 223, in main\n status = self.run(options, args)\n File \""/usr/lib/python2.7/site-packages/pip/commands/install.py\"", line 308, in run\n strip_file_prefix=options.strip_file_prefix,\n File \""/usr/lib/python2.7/site-packages/pip/req/req_set.py\"", line 640, in install\n 
requirement.uninstall(auto_confirm=True)\n File \""/usr/lib/python2.7/site-packages/pip/req/req_install.py\"", line 726, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \""/usr/lib/python2.7/site-packages/pip/req/req_uninstall.py\"", line 125, in remove\n renames(path, new_path)\n File \""/usr/lib/python2.7/site-packages/pip/utils/__init__.py\"", line 314, in renames\n shutil.move(old, new)\n File \""/usr/lib64/python2.7/shutil.py\"", line 302, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/bin/pip'\n\n"" } ``` ",True,"pip module doesn't use proper pip executable from virtualenv - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pip_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /code/ansible.cfg configured module search path = Default w/o overrides ``` Could affect 2.1 also ##### OS / ENVIRONMENT CentOS 7 ##### SUMMARY pip module uses `/bin/pip2` instead of the one from the virtualenv when virtualenv is specified and is newly created. This result for example in that it unable to upgrade pip inside the virtualenv with the error ``` OSError: [Errno 13] Permission denied: '/usr/bin/pip' ``` because it tries to unlink the system pip. ##### STEPS TO REPRODUCE * Use target host with the old pip installed. * Try to upgrade pip to the latest state via pip module providing some new virtualenv path using unprivileged user having full access to the target virtualenv. * Notice system pip executable unlink error. ``` - name: ensure pip is latest version in venv pip: name: pip virtualenv: '{{ projects_path }}/venv' state: latest become_user: venvuser ``` ##### EXPECTED RESULTS pip module should use pip executable from the virtualenv provided via the module arg. ##### ACTUAL RESULTS ``` fatal: [hostname]: FAILED! 
=> { ""changed"": false, ""cmd"": ""/bin/pip2 install -U pip"", ""failed"": true, ""invocation"": { ""module_args"": { ""chdir"": null, ""editable"": true, ""executable"": null, ""extra_args"": null, ""name"": [ ""pip"" ], ""requirements"": null, ""state"": ""latest"", ""umask"": null, ""use_mirrors"": true, ""version"": null, ""virtualenv"": ""/srv/www/hostname/venv"", ""virtualenv_command"": ""virtualenv"", ""virtualenv_python"": null, ""virtualenv_site_packages"": false }, ""module_name"": ""pip"" }, ""msg"": ""stdout: New python executable in /srv/www/hostname/venv/bin/python\nInstalling Setuptools..............................................................................................................................................................................................................................done.\nInstalling Pip.....................................................................................................................................................................................................................................................................................................................................done.\nCollecting pip\n Downloading pip-9.0.0-py2.py3-none-any.whl (1.3MB)\nInstalling collected packages: pip\n Found existing installation: pip 7.1.0\n Uninstalling pip-7.1.0:\n\n:stderr: You are using pip version 7.1.0, however version 9.0.0 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\nException:\nTraceback (most recent call last):\n File \""/usr/lib/python2.7/site-packages/pip/basecommand.py\"", line 223, in main\n status = self.run(options, args)\n File \""/usr/lib/python2.7/site-packages/pip/commands/install.py\"", line 308, in run\n strip_file_prefix=options.strip_file_prefix,\n File \""/usr/lib/python2.7/site-packages/pip/req/req_set.py\"", line 640, in install\n requirement.uninstall(auto_confirm=True)\n File \""/usr/lib/python2.7/site-packages/pip/req/req_install.py\"", line 726, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \""/usr/lib/python2.7/site-packages/pip/req/req_uninstall.py\"", line 125, in remove\n renames(path, new_path)\n File \""/usr/lib/python2.7/site-packages/pip/utils/__init__.py\"", line 314, in renames\n shutil.move(old, new)\n File \""/usr/lib64/python2.7/shutil.py\"", line 302, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/bin/pip'\n\n"" } ``` ",1,pip module doesn t use proper pip executable from virtualenv issue type bug report component name pip module ansible version ansible config file code ansible cfg configured module search path default w o overrides could affect also os environment centos summary pip module uses bin instead of the one from the virtualenv when virtualenv is specified and is newly created this result for example in that it unable to upgrade pip inside the virtualenv with the error oserror permission denied usr bin pip because it tries to unlink the system pip steps to reproduce use target host with the old pip installed try to upgrade pip to the latest state via pip module providing some new virtualenv path using unprivileged user having full access to the target virtualenv notice system pip executable unlink error name ensure pip is latest version in venv pip name pip virtualenv projects path venv state latest become user venvuser expected results pip module should use pip executable from the virtualenv provided via the module arg actual results fatal failed changed false cmd bin install u pip 
failed true invocation module args chdir null editable true executable null extra args null name pip requirements null state latest umask null use mirrors true version null virtualenv srv www hostname venv virtualenv command virtualenv virtualenv python null virtualenv site packages false module name pip msg stdout new python executable in srv www hostname venv bin python ninstalling setuptools done ninstalling pip done ncollecting pip n downloading pip none any whl ninstalling collected packages pip n found existing installation pip n uninstalling pip n n stderr you are using pip version however version is available nyou should consider upgrading via the pip install upgrade pip command nexception ntraceback most recent call last n file usr lib site packages pip basecommand py line in main n status self run options args n file usr lib site packages pip commands install py line in run n strip file prefix options strip file prefix n file usr lib site packages pip req req set py line in install n requirement uninstall auto confirm true n file usr lib site packages pip req req install py line in uninstall n paths to remove remove auto confirm n file usr lib site packages pip req req uninstall py line in remove n renames path new path n file usr lib site packages pip utils init py line in renames n shutil move old new n file usr shutil py line in move n os unlink src noserror permission denied usr bin pip n n ,1 1067,4889235461.0,IssuesEvent,2016-11-18 09:31:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,include_vars - directory inconsistency,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_vars ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/user/project/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Inconsistent directory handling by include_vars module ##### STEPS TO REPRODUCE ``` structure: site.yml -> include: /playbooks/test.yml /playbooks/test.yml -> roles: test /roles/test/tasks/main.yml -> include_vars: ""{{ vars_file | default() }}"" ``` ##### EXPECTED RESULTS consistency ##### ACTUAL RESULTS ``` include_vars: ""{{ vars_file | default() }}"" fatal: [hostname]: FAILED! => {""failed"": true, ""msg"": ""the file_name '/home/user/project/roles/test/tasks' does not exist, or is not readable""} include_vars: ""{{ vars_file | default('../file.yml') }}"" fatal: [hostname]: FAILED! => {""changed"": false, ""failed"": true, ""file"": ""/home/user/project/file.yml"", ""invocation"": {""module_args"": {""_raw_params"": ""../file.yml""}, ""module_name"": ""include_vars""}, ""msg"": ""Source file not found.""} include_vars: ""{{ vars_file | default('file.yml') }}"" fatal: [hostname]: FAILED! 
=> {""changed"": false, ""failed"": true, ""file"": ""/home/user/project/playbooks/file.yml"", ""invocation"": {""module_args"": {""_raw_params"": ""file.yml""}, ""module_name"": ""include_vars""}, ""msg"": ""Source file not found.""} works as expected with absolute path ``` ",True,"include_vars - directory inconsistency - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_vars ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/user/project/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Inconsistent directory handling by include_vars module ##### STEPS TO REPRODUCE ``` structure: site.yml -> include: /playbooks/test.yml /playbooks/test.yml -> roles: test /roles/test/tasks/main.yml -> include_vars: ""{{ vars_file | default() }}"" ``` ##### EXPECTED RESULTS consistency ##### ACTUAL RESULTS ``` include_vars: ""{{ vars_file | default() }}"" fatal: [hostname]: FAILED! => {""failed"": true, ""msg"": ""the file_name '/home/user/project/roles/test/tasks' does not exist, or is not readable""} include_vars: ""{{ vars_file | default('../file.yml') }}"" fatal: [hostname]: FAILED! => {""changed"": false, ""failed"": true, ""file"": ""/home/user/project/file.yml"", ""invocation"": {""module_args"": {""_raw_params"": ""../file.yml""}, ""module_name"": ""include_vars""}, ""msg"": ""Source file not found.""} include_vars: ""{{ vars_file | default('file.yml') }}"" fatal: [hostname]: FAILED! => {""changed"": false, ""failed"": true, ""file"": ""/home/user/project/playbooks/file.yml"", ""invocation"": {""module_args"": {""_raw_params"": ""file.yml""}, ""module_name"": ""include_vars""}, ""msg"": ""Source file not found.""} works as expected with absolute path ``` ",1,include vars directory inconsistency issue type bug report component name include vars ansible version ansible config file home user project ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary inconsistent directory handling by include vars module steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used structure site yml include playbooks test yml playbooks test yml roles test roles test tasks main yml include vars vars file default expected results consistency actual results include vars vars file default fatal failed failed true msg the file name home user project roles test tasks does not exist or is not readable include vars vars file default file yml fatal failed changed false failed true file home user project file yml invocation module args raw params file yml module name include vars msg source file not found include vars vars file default file yml fatal failed changed false failed true file home user project playbooks file yml invocation module args raw params file yml module name include vars msg source file not found works as expected with absolute path ,1 1838,6577373619.0,IssuesEvent,2017-09-12 00:27:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,change logging server on cisco router failed with Networking module ios_config or ios_template,affects_2.2 feature_idea networking waiting_on_maintainer," ##### ISSUE TYPE - Bug 
Report ##### COMPONENT NAME - ios_config - ios_template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel f961f9f4d0) last updated 2016/05/28 09:56:33 (GMT +800) lib/ansible/modules/core: (detached HEAD 90e8a36d4c) last updated 2016/05/28 10:23:02 (GMT +800) lib/ansible/modules/extras: (detached HEAD 0e4a023a7e) last updated 2016/05/28 10:23:37 (GMT +800) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### site.yml
- name: play show commands and change running-config on cisco ios device
  hosts:
  - cisco
  gather_facts: no
  connection: local
  roles:
  - cisco
##### tasks/main.yml

---
- name: define provider
  set_fact:
    provider:
      host: ""{{inventory_hostname}}""
      username: ""{{username}}""
      password: ""{{password}}""

- name: run show commands with ios_command
  ios_command:
    provider: ""{{provider}}""
    commands:
      - show running | include logging
  register: show_result

- debug: var=show_result.stdout_lines

- name: change syslog server with ios_config
  ios_config:
    provider: ""{{ provider }}""
    authorize: yes
    lines:
      - logging {{syslogsrv01}}
      - logging {{syslogsrv02}}
    replace: line
    match: line
    before:
      - no logging {{syslogsrv03}}
      - no logging {{syslogsrv04}}
    force: true

- name: run show running-config after change with ios_config
  ios_command:
    provider: ""{{ provider }}""
    commands:
      - show running | include logging
  register: after_change_syslog

- debug: var=after_change_syslog.stdout_lines

- name: write running-config to startup-config
  ios_command:
    provider: ""{{ provider }}""
    commands:
      - write
  register: write_output

- debug: var=write_output.stdout_lines
##### OS / ENVIRONMENT CentOS Linux release 7.1.1503 (Core) ##### SUMMARY I am using ansible to change hundreds of routers' logging destinations. Some routers may already be configured with syslogsrv03(1.1.1.3), others may be configured with syslogsrv04(1.1.1.4). Now I want to change all routers' logging destinations to logsrv01(1.1.1.1) and logsrv02(1.1.1.2), then remove the original logging destinations. ##### STEPS TO REPRODUCE ``` - name: change logging server with ios_config ios_config: provider: ""{{ provider }}"" authorize: yes lines: - logging {{syslogsrv01}} - logging {{syslogsrv02}} replace: line match: line before: - no logging {{syslogsrv03}} - no logging {{syslogsrv04}} force: true ``` when I log in to a router and issue the command _no logging 1.1.1.4_, the following output is produced:
r1(config)#no logging 1.1.1.4
Host 1.1.1.4 not found for logging
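(Editor's sketch, not part of the original report: one way to avoid this error is to issue `no logging` only for hosts that are actually present in the running config. The sketch below reuses the `provider`, `syslogsrv03`/`syslogsrv04` variables and the `show_result` register from tasks/main.yml above; splitting the change into two ios_config tasks is an assumption about a workaround, not documented module behaviour.)
```
- name: remove old logging hosts only when they exist in the running config
  ios_config:
    provider: ""{{ provider }}""
    authorize: yes
    lines:
      - no logging {{ item }}
  with_items:
    - ""{{ syslogsrv03 }}""
    - ""{{ syslogsrv04 }}""
  when: item in show_result.stdout[0]

- name: add the new logging hosts
  ios_config:
    provider: ""{{ provider }}""
    authorize: yes
    lines:
      - logging {{ syslogsrv01 }}
      - logging {{ syslogsrv02 }}
```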
##### EXPECTED RESULTS Simply ignore and execute next command if logging server does not exist. When I use same steps to change ntp servers, everything works fine. ##### ACTUAL RESULTS ``` PLAY [play show commands and change running-config on cisco ios device] ******** TASK [cisco : define provider] ************************************************* ok: [r1] TASK [cisco : run show commands with ios_command] ****************************** ok: [r1] TASK [cisco : debug] *********************************************************** ok: [r1] => { ""show_result.stdout_lines"": [ [ ""logging buffered 512000"", ""cts logging verbose"", ""logging source-interface Loopback1"", ""logging host 1.1.1.3"", ] ] } TASK [cisco : change syslog server with ios_config] **************************** fatal: [r1]: FAILED! => {""changed"": false, ""commands"": [""configure terminal"", ""no logging 1.1.1.3"", ""no logging 1.1.1.4"", ""logging 1.1.1.1"", ""logging 1.1.1.2""], ""failed"": true, ""msg"": ""matched error in response: no logging 1.1.1.4\r\nHost 1.1.1.4 not found for logging\r\nr1(config)#""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @site.retry PLAY RECAP ********************************************************************* r1 : ok=3 changed=0 unreachable=0 failed=1 ``` ",True,"change logging server on cisco router failed with Networking module ios_config or ios_template - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - ios_config - ios_template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel f961f9f4d0) last updated 2016/05/28 09:56:33 (GMT +800) lib/ansible/modules/core: (detached HEAD 90e8a36d4c) last updated 2016/05/28 10:23:02 (GMT +800) lib/ansible/modules/extras: (detached HEAD 0e4a023a7e) last updated 2016/05/28 10:23:37 (GMT +800) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### site.yml
- name: play show commands and change running-config on cisco ios device
  hosts:
  - cisco
  gather_facts: no
  connection: local
  roles:
  - cisco
##### tasks/main.yml

---
- name: define provider
  set_fact:
    provider:
      host: ""{{inventory_hostname}}""
      username: ""{{username}}""
      password: ""{{password}}""

- name: run show commands with ios_command
  ios_command:
    provider: ""{{provider}}""
    commands:
      - show running | include logging
  register: show_result

- debug: var=show_result.stdout_lines

- name: change syslog server with ios_config
  ios_config:
    provider: ""{{ provider }}""
    authorize: yes
    lines:
      - logging {{syslogsrv01}}
      - logging {{syslogsrv02}}
    replace: line
    match: line
    before:
      - no logging {{syslogsrv03}}
      - no logging {{syslogsrv04}}
    force: true

- name: run show running-config after change with ios_config
  ios_command:
    provider: ""{{ provider }}""
    commands:
      - show running | include logging
  register: after_change_syslog

- debug: var=after_change_syslog.stdout_lines

- name: write running-config to startup-config
  ios_command:
    provider: ""{{ provider }}""
    commands:
      - write
  register: write_output

- debug: var=write_output.stdout_lines
##### OS / ENVIRONMENT CentOS Linux release 7.1.1503 (Core) ##### SUMMARY I am using ansible to change hundreds of routers' logging destinations. Some routers may already be configured with syslogsrv03(1.1.1.3), others may be configured with syslogsrv04(1.1.1.4). Now I want to change all routers' logging destinations to logsrv01(1.1.1.1) and logsrv02(1.1.1.2), then remove the original logging destinations. ##### STEPS TO REPRODUCE ``` - name: change logging server with ios_config ios_config: provider: ""{{ provider }}"" authorize: yes lines: - logging {{syslogsrv01}} - logging {{syslogsrv02}} replace: line match: line before: - no logging {{syslogsrv03}} - no logging {{syslogsrv04}} force: true ``` when I log in to a router and issue the command _no logging 1.1.1.4_, the following output is produced:
r1(config)#no logging 1.1.1.4
Host 1.1.1.4 not found for logging
##### EXPECTED RESULTS Simply ignore and execute next command if logging server does not exist. When I use same steps to change ntp servers, everything works fine. ##### ACTUAL RESULTS ``` PLAY [play show commands and change running-config on cisco ios device] ******** TASK [cisco : define provider] ************************************************* ok: [r1] TASK [cisco : run show commands with ios_command] ****************************** ok: [r1] TASK [cisco : debug] *********************************************************** ok: [r1] => { ""show_result.stdout_lines"": [ [ ""logging buffered 512000"", ""cts logging verbose"", ""logging source-interface Loopback1"", ""logging host 1.1.1.3"", ] ] } TASK [cisco : change syslog server with ios_config] **************************** fatal: [r1]: FAILED! => {""changed"": false, ""commands"": [""configure terminal"", ""no logging 1.1.1.3"", ""no logging 1.1.1.4"", ""logging 1.1.1.1"", ""logging 1.1.1.2""], ""failed"": true, ""msg"": ""matched error in response: no logging 1.1.1.4\r\nHost 1.1.1.4 not found for logging\r\nr1(config)#""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @site.retry PLAY RECAP ********************************************************************* r1 : ok=3 changed=0 unreachable=0 failed=1 ``` ",1,change logging server on cisco router failed with networking module ios config or ios template issue type bug report component name ios config ios template ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables site yml name play show commands and change running config on cisco ios device hosts cisco gather facts no connection local roles cisco tasks main yml name define provider set fact provider host inventory hostname username username password password name run show commands with ios command ios command provider provider commands show running include logging register show result debug var show result stdout lines name change syslog server with ios config ios config provider provider authorize yes lines logging logging replace line match line before no logging no logging force true name run show running config after change with ios config ios command provider provider commands show running include logging register after change syslog debug var after change syslog stdout lines name write running config to startup config ios command provider provider commands write register write output debug var write output stdout lines os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos linux release core summary i am using ansible to change hunders of router s logging destination some routers may already configured with others may configured with now i want change all router s logging destination to and then remove original logging destination steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name change logging server with ios config ios config provider provider authorize yes lines logging logging replace line match line before no logging no logging force true when i login to router and issue command no 
logging following output produced config no logging host not found for logging expected results simply ignore and execute next command if logging server does not exist when i use same steps to change ntp servers everything works fine actual results play task ok task ok task ok show result stdout lines logging buffered cts logging verbose logging source interface logging host task fatal failed changed false commands failed true msg matched error in response no logging r nhost not found for logging r config no more hosts left to retry use limit site retry play recap ok changed unreachable failed ,1 920,4622139259.0,IssuesEvent,2016-09-27 06:04:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service: timeout option not respected,affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### SUMMARY The `timeout` option specified in a docker_service task seems to be not actually used. ##### STEPS TO REPRODUCE SEE https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_service.py#L837-L862 ",True,"docker_service: timeout option not respected - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### SUMMARY The `timeout` option specified in a docker_service task seems to be not actually used. ##### STEPS TO REPRODUCE SEE https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_service.py#L837-L862 ",1,docker service timeout option not respected issue type bug report component name docker service ansible version ansible summary the timeout option specified in a docker service task seems to be not actually used steps to reproduce see ,1 1824,6577335030.0,IssuesEvent,2017-09-12 00:11:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,postgresql_db owner for tables and views,affects_2.0 bug_report feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Bug - Feature Idea ##### COMPONENT NAME postgresql_db ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/toga/PycharmProjects/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg ``` [defaults] inventory = hosts ask_vault_pass=True [privilege_escalation] become=True become_ask_pass=True ``` ##### OS / ENVIRONMENT ``` Ubuntu 14.04.4 LTS virtualenv with ansible PostgreSQL 9.3 ``` ##### SUMMARY The module `postgresql_db` should recursively set the database owner for all included tables, views and sequences. `ALTER DATABASE %s OWNER TO %s` only affects the database but not the tables in the database. After restoring a dump with the postgres user, the owner of all tables where set to postgres and won't be reassigned to the user named in the task. 
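(Editor's sketch, not part of the original report: until the module reassigns table ownership itself, tables left owned by postgres after a restore can be reassigned manually. The two tasks below are a minimal illustration using the `test` database and role from the reproduction steps that follow; driving `psql` through the command module is an assumption, not part of postgresql_db.)
```
- name: list tables in the test database that are still owned by postgres
  become: yes
  become_user: postgres
  command: psql -d test -At -c ""SELECT tablename FROM pg_tables WHERE schemaname='public' AND tableowner='postgres';""
  register: postgres_owned_tables
  changed_when: false

- name: reassign those tables to the test role
  become: yes
  become_user: postgres
  command: psql -d test -c ""ALTER TABLE public.{{ item }} OWNER TO test;""
  with_items: ""{{ postgres_owned_tables.stdout_lines }}""
```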
##### STEPS TO REPRODUCE Connect to psql ``` CREATE USER test; CREATE DATABASE test OWNER postgres; \c test CREATE TABLE films ( code char(5) CONSTRAINT firstkey PRIMARY KEY, title varchar(40) NOT NULL, did integer NOT NULL, date_prod date, kind varchar(10), len interval hour to minute ); ``` User ansible ``` ansible localhost --connection=local -b --become-user postgres -m postgresql_db -a ""name=test owner=test"" ``` ##### EXPECTED RESULTS ``` test=# \l Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- test | test | UTF8 | de_DE.UTF-8 | de_DE.UTF-8 | =Tc/test2 + | | | | | test2=CTc/test2 test=# \d List of relations Schema | Name | Type | Owner --------+-------+-------+---------- public | films | table | test (1 row) ``` ##### ACTUAL RESULTS ``` test=# \l Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- test | test | UTF8 | de_DE.UTF-8 | de_DE.UTF-8 | =Tc/test2 + | | | | | test2=CTc/test2 test=# \d List of relations Schema | Name | Type | Owner --------+-------+-------+---------- public | films | table | postgres (1 row) ``` ",True,"postgresql_db owner for tables and views - ##### ISSUE TYPE - Bug - Feature Idea ##### COMPONENT NAME postgresql_db ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/toga/PycharmProjects/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg ``` [defaults] inventory = hosts ask_vault_pass=True [privilege_escalation] become=True become_ask_pass=True ``` ##### OS / ENVIRONMENT ``` Ubuntu 14.04.4 LTS virtualenv with ansible PostgreSQL 9.3 ``` ##### SUMMARY The module `postgresql_db` should recursively set the database owner for all included tables, views and sequences. `ALTER DATABASE %s OWNER TO %s` only affects the database but not the tables in the database. After restoring a dump with the postgres user, the owner of all tables where set to postgres and won't be reassigned to the user named in the task. 
##### STEPS TO REPRODUCE Connect to psql ``` CREATE USER test; CREATE DATABASE test OWNER postgres; \c test CREATE TABLE films ( code char(5) CONSTRAINT firstkey PRIMARY KEY, title varchar(40) NOT NULL, did integer NOT NULL, date_prod date, kind varchar(10), len interval hour to minute ); ``` User ansible ``` ansible localhost --connection=local -b --become-user postgres -m postgresql_db -a ""name=test owner=test"" ``` ##### EXPECTED RESULTS ``` test=# \l Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- test | test | UTF8 | de_DE.UTF-8 | de_DE.UTF-8 | =Tc/test2 + | | | | | test2=CTc/test2 test=# \d List of relations Schema | Name | Type | Owner --------+-------+-------+---------- public | films | table | test (1 row) ``` ##### ACTUAL RESULTS ``` test=# \l Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- test | test | UTF8 | de_DE.UTF-8 | de_DE.UTF-8 | =Tc/test2 + | | | | | test2=CTc/test2 test=# \d List of relations Schema | Name | Type | Owner --------+-------+-------+---------- public | films | table | postgres (1 row) ``` ",1,postgresql db owner for tables and views issue type bug feature idea component name postgresql db ansible version ansible config file home toga pycharmprojects ansible ansible cfg configured module search path default w o overrides configuration ansible cfg inventory hosts ask vault pass true become true become ask pass true os environment ubuntu lts virtualenv with ansible postgresql summary the module postgresql db should recursively set the database owner for all included tables views and sequences alter database s owner to s only affects the database but not the tables in the database after restoring a dump with the postgres user the owner of all tables where set to postgres and won t be reassigned to the user named in the task steps to reproduce connect to psql create user test create database test owner postgres c test create table films code char constraint firstkey primary key title varchar not null did integer not null date prod date kind varchar len interval hour to minute user ansible ansible localhost connection local b become user postgres m postgresql db a name test owner test expected results test l name owner encoding collate ctype access privileges test test de de utf de de utf tc ctc test d list of relations schema name type owner public films table test row actual results test l name owner encoding collate ctype access privileges test test de de utf de de utf tc ctc test d list of relations schema name type owner public films table postgres row ,1 882,4543471170.0,IssuesEvent,2016-09-10 05:00:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container not preserving case for boolean environmental variables,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default configuration is being used. ##### OS / ENVIRONMENT Running on Arch Linux, managing Ubuntu ##### SUMMARY When passing an environmental variable to docker using docker_container, a key with a value of _true_ will be passed to Docker as _True_ instead of _true_. Placing _true_ in quotes fixes this issue. 
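(Editor's sketch, not part of the original report: the quoting workaround mentioned above, shown against the same gitlab container used in the steps to reproduce below; the inline comments are added for illustration.)
```
- name: pass boolean-looking env values as strings
  docker_container:
    name: gitlab
    image: sameersbn/gitlab
    state: started
    env:
      GITLAB_HTTPS: ""true""    # quoted: Docker receives the literal string true
      # GITLAB_HTTPS: true     # unquoted: YAML loads this as a boolean and it reaches Docker as True
```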
##### STEPS TO REPRODUCE ``` - name: Restart gitlab docker_container: name: gitlab image: sameersbn/gitlab state: started recreate: yes pull: true ports: - ""443:443"" - ""80:80"" - ""22:22"" links: - ""redis:redisio"" - ""postgresql:postgresql"" env: GITLAB_HTTPS: ""true"" # Works properly GITLAB_HTTPS: true # Passes True to Docker instead of true ``` ##### EXPECTED RESULTS The GITLAB_HTTPS variable should be set to _true_ ##### ACTUAL RESULTS The GITLAB_HTTPS variable is set to _True_, breaking the container's configuration. ``` ``` ",True,"docker_container not preserving case for boolean environmental variables - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default configuration is being used. ##### OS / ENVIRONMENT Running on Arch Linux, managing Ubuntu ##### SUMMARY When passing an environmental variable to docker using docker_container, a key with a value of _true_ will be passed to Docker as _True_ instead of _true_. Placing _true_ in quotes fixes this issue. ##### STEPS TO REPRODUCE ``` - name: Restart gitlab docker_container: name: gitlab image: sameersbn/gitlab state: started recreate: yes pull: true ports: - ""443:443"" - ""80:80"" - ""22:22"" links: - ""redis:redisio"" - ""postgresql:postgresql"" env: GITLAB_HTTPS: ""true"" # Works properly GITLAB_HTTPS: true # Passes True to Docker instead of true ``` ##### EXPECTED RESULTS The GITLAB_HTTPS variable should be set to _true_ ##### ACTUAL RESULTS The GITLAB_HTTPS variable is set to _True_, breaking the container's configuration. ``` ``` ",1,docker container not preserving case for boolean environmental variables issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default configuration is being used os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running on arch linux managing ubuntu summary when passing an environmental variable to docker using docker container a key with a value of true will be passed to docker as true instead of true placing true in quotes fixes this issue steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name restart gitlab docker container name gitlab image sameersbn gitlab state started recreate yes pull true ports links redis redisio postgresql postgresql env gitlab https true works properly gitlab https true passes true to docker instead of true expected results the gitlab https variable should be set to true actual results the gitlab https variable is set to true breaking the container s configuration ,1 1201,5133079610.0,IssuesEvent,2017-01-11 01:42:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_elb_lb is not always idempotent when updating healthchecks,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ##### SUMMARY ELB healthcheck changes are 
sometimes ignored. ##### STEPS TO REPRODUCE Create elb with tcp healthcheck on port 80 with the module. Run the module again, except using a different port. ##### EXPECTED RESULTS The healthcheck should be updated with the new target host:port ##### ACTUAL RESULTS Invocation of the module ``` ""invocation"": { ""module_args"": { ""access_logs"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": true, ""ec2_url"": null, ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""ping_port"": ""45784"", ""ping_protocol"": ""tcp"", ""response_timeout"": 2, ""unhealthy_threshold"": 10 }, ""idle_timeout"": null, ""instance_ids"": null, ""listeners"": [ { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 80, ""protocol"": ""http"" }, { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 443, ""protocol"": ""https"", ""ssl_certificate_id"": ""cert_id"" }, { ""instance_port"": ""60047"", ""instance_protocol"": ""tcp"", ""load_balancer_port"": 8000, ""protocol"": ""ssl"", ""ssl_certificate_id"": ""cert_id"" } ], ""name"": ""elbName"", ""profile"": null, ""purge_instance_ids"": false, ""purge_listeners"": true, ""purge_subnets"": false, ""purge_zones"": false, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": null, ""security_group_names"": [ ""sg1"", ""sg2"" ], ""security_token"": null, ""state"": ""present"", ""stickiness"": null, ""subnets"": [ ""subnet-1234a"", ""subnet-1234b"", ""subnet-1234c"" ], ""tags"": null, ""validate_certs"": true, ""wait"": false, ""wait_timeout"": 60, ""zones"": null }, ""module_name"": ""ec2_elb_lb"" }, ""item"": { ""service1"": { ""host_port"": ""50765"", ""task"": { ""name"": ""taskName"" } }, ""service2"": { ""host_port"": ""60047"" }, ""service3"": { ""host_port"": ""45784"", ""ssl_certificate_id"": ""cert_id"", ""task"": { ""name"": ""taskName"" } }, ""name"": { ""suffix"": ""lend"" }, ""service"": { ""name"": ""serviceName"" } } ``` output from module ``` ""elb"": { ""app_cookie_policy"": null, ""backends"": [ ], ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": ""yes"", ""dns_name"": ""dns_name"", ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""target"": ""TCP:52359"", ""timeout"": 2, ""unhealthy_threshold"": 10 }, ""hosted_zone_id"": ""hostedZoneId"", ""hosted_zone_name"": null, ""idle_timeout"": 60, ""in_service_count"": 1, ""instance_health"": [ { ""instance_id"": ""i-250534bf"", ""reason_code"": ""N/A"", ""state"": ""InService"" }, { ""instance_id"": ""i-8e68ebcb"", ""reason_code"": ""Instance"", ""state"": ""OutOfService"" } ], ""instances"": [ ""i-250534bf"" ], ""lb_cookie_policy"": null, ""listeners"": [ [ 8000, 60047, ""SSL"", ""TCP"", ""cert_id"" ], [ 80, 45784, ""HTTP"", ""HTTP"" ], [ 443, 45784, ""HTTPS"", ""HTTP"", ""cert_id"" ] ], ""name"": ""elbName"", ""out_of_service_count"": 1, ""proxy_policy"": null, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": [ ""sg1"", ""sg2"" ], ""status"": ""ok"", ""subnets"": [ ""subnet-1"", ""subnet-2"", ""subnet-3"" ], ""tags"": null, ""unknown_instance_state_count"": 0, ""zones"": [ ""us-east-1a"", ""us-east-1b"", ""us-east-1c"" ] } ``` Note the `ping_port` from the invocation does not match what is in the `target:port` from the healthcheck property of the module output. Seems to be ignored in this run. 
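The invocation dump above is verbose, so here is a condensed restatement of the second run, keeping only the fields relevant to the health check; the values are taken from the reported invocation, and the trimming is mine. Only `ping_port` differs from the first run, and that is the change the report says is sometimes not applied.

```
- name: Update ELB health check target port
  ec2_elb_lb:
    name: elbName
    state: present
    region: us-east-1
    scheme: internal
    health_check:
      ping_protocol: tcp
      ping_port: 45784        # module output above still reports target TCP:52359
      response_timeout: 2
      interval: 5
      unhealthy_threshold: 10
      healthy_threshold: 2
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 45784
```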
",True,"ec2_elb_lb is not always idempotent when updating healthchecks - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ##### SUMMARY ELB healthcheck changes are sometimes ignored. ##### STEPS TO REPRODUCE Create elb with tcp healthcheck on port 80 with the module. Run the module again, except using a different port. ##### EXPECTED RESULTS The healthcheck should be updated with the new target host:port ##### ACTUAL RESULTS Invocation of the module ``` ""invocation"": { ""module_args"": { ""access_logs"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": true, ""ec2_url"": null, ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""ping_port"": ""45784"", ""ping_protocol"": ""tcp"", ""response_timeout"": 2, ""unhealthy_threshold"": 10 }, ""idle_timeout"": null, ""instance_ids"": null, ""listeners"": [ { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 80, ""protocol"": ""http"" }, { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 443, ""protocol"": ""https"", ""ssl_certificate_id"": ""cert_id"" }, { ""instance_port"": ""60047"", ""instance_protocol"": ""tcp"", ""load_balancer_port"": 8000, ""protocol"": ""ssl"", ""ssl_certificate_id"": ""cert_id"" } ], ""name"": ""elbName"", ""profile"": null, ""purge_instance_ids"": false, ""purge_listeners"": true, ""purge_subnets"": false, ""purge_zones"": false, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": null, ""security_group_names"": [ ""sg1"", ""sg2"" ], ""security_token"": null, ""state"": ""present"", ""stickiness"": null, ""subnets"": [ ""subnet-1234a"", ""subnet-1234b"", ""subnet-1234c"" ], ""tags"": null, ""validate_certs"": true, ""wait"": false, ""wait_timeout"": 60, ""zones"": null }, ""module_name"": ""ec2_elb_lb"" }, ""item"": { ""service1"": { ""host_port"": ""50765"", ""task"": { ""name"": ""taskName"" } }, ""service2"": { ""host_port"": ""60047"" }, ""service3"": { ""host_port"": ""45784"", ""ssl_certificate_id"": ""cert_id"", ""task"": { ""name"": ""taskName"" } }, ""name"": { ""suffix"": ""lend"" }, ""service"": { ""name"": ""serviceName"" } } ``` output from module ``` ""elb"": { ""app_cookie_policy"": null, ""backends"": [ ], ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": ""yes"", ""dns_name"": ""dns_name"", ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""target"": ""TCP:52359"", ""timeout"": 2, ""unhealthy_threshold"": 10 }, ""hosted_zone_id"": ""hostedZoneId"", ""hosted_zone_name"": null, ""idle_timeout"": 60, ""in_service_count"": 1, ""instance_health"": [ { ""instance_id"": ""i-250534bf"", ""reason_code"": ""N/A"", ""state"": ""InService"" }, { ""instance_id"": ""i-8e68ebcb"", ""reason_code"": ""Instance"", ""state"": ""OutOfService"" } ], ""instances"": [ ""i-250534bf"" ], ""lb_cookie_policy"": null, ""listeners"": [ [ 8000, 60047, ""SSL"", ""TCP"", ""cert_id"" ], [ 80, 45784, ""HTTP"", ""HTTP"" ], [ 443, 45784, ""HTTPS"", ""HTTP"", ""cert_id"" ] ], ""name"": ""elbName"", ""out_of_service_count"": 1, ""proxy_policy"": null, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": [ ""sg1"", ""sg2"" ], ""status"": ""ok"", ""subnets"": [ ""subnet-1"", ""subnet-2"", ""subnet-3"" ], ""tags"": null, 
""unknown_instance_state_count"": 0, ""zones"": [ ""us-east-1a"", ""us-east-1b"", ""us-east-1c"" ] } ``` Note the `ping_port` from the invocation does not match what is in the `target:port` from the healthcheck property of the module output. Seems to be ignored in this run. ",1, elb lb is not always idempotent when updating healthchecks issue type bug report component name elb lb ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment n a aws summary elb healthcheck changes are sometimes ignored steps to reproduce create elb with tcp healthcheck on port with the module run the module again except using a different port expected results the healthcheck should be updated with the new target host port actual results invocation of the module invocation module args access logs null aws access key null aws secret key null connection draining timeout cross az load balancing true url null health check healthy threshold interval ping port ping protocol tcp response timeout unhealthy threshold idle timeout null instance ids null listeners instance port instance protocol http load balancer port protocol http instance port instance protocol http load balancer port protocol https ssl certificate id cert id instance port instance protocol tcp load balancer port protocol ssl ssl certificate id cert id name elbname profile null purge instance ids false purge listeners true purge subnets false purge zones false region us east scheme internal security group ids null security group names security token null state present stickiness null subnets subnet subnet subnet tags null validate certs true wait false wait timeout zones null module name elb lb item host port task name taskname host port host port ssl certificate id cert id task name taskname name suffix lend service name servicename output from module elb app cookie policy null backends connection draining timeout cross az load balancing yes dns name dns name health check healthy threshold interval target tcp timeout unhealthy threshold hosted zone id hostedzoneid hosted zone name null idle timeout in service count instance health instance id i reason code n a state inservice instance id i reason code instance state outofservice instances i lb cookie policy null listeners ssl tcp cert id http http https http cert id name elbname out of service count proxy policy null region us east scheme internal security group ids status ok subnets subnet subnet subnet tags null unknown instance state count zones us east us east us east note the ping port from the invocation does not match what is in the target port from the healthcheck property of the module output seems to be ignored in this run ,1 1304,5542120510.0,IssuesEvent,2017-03-22 14:25:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,lineinfile module : insertbefore should insert before first match of specified regular expression,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY insertbefore : If specified, the line will be inserted before the last match of specified regular expression insertafter : If specified, the line will be inserted after the last match of specified regular expression It seems to me that 
insertbefore should insert before the first match of the specified regexp. This would be consistent with the way that insertafter works (the reverse). In our example we are adding lines to sshd_config that need to go before any ^Match blocks. If there are multiple match blocks then insertbefore won't create a valid sshd_config ##### STEPS TO REPRODUCE `````` lineinfile: dest=/etc/ssh/sshd_config regexp=""^#?PermitUserEnvironment"" line=""PermitUserEnvironment no"" insertbefore=""^Match"" state=present``` `````` ##### EXPECTED RESULTS If more than one Match block is in /etc/ssh/sshd_config, the line should be inserted before the first. ##### ACTUAL RESULTS ssh throws an error: ``` /etc/ssh/sshd_config line 94: Directive 'PermitUserEnvironment' is not allowed within a Match block ``` ",True,"lineinfile module : insertbefore should insert before first match of specified regular expression - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY insertbefore : If specified, the line will be inserted before the last match of specified regular expression insertafter : If specified, the line will be inserted after the last match of specified regular expression It seems to me that insertbefore should insert before the first match of the specified regexp. This would be consistent with the way that insertafter works (the reverse). In our example we are adding lines to sshd_config that need to go before any ^Match blocks. If there are multiple match blocks then insertbefore won't create a valid sshd_config ##### STEPS TO REPRODUCE `````` lineinfile: dest=/etc/ssh/sshd_config regexp=""^#?PermitUserEnvironment"" line=""PermitUserEnvironment no"" insertbefore=""^Match"" state=present``` `````` ##### EXPECTED RESULTS If more than one Match block is in /etc/ssh/sshd_config, the line should be inserted before the first. 
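For readability, a block-form restatement of the reproduction task quoted above (same parameters, nothing added); the desired behaviour is that the line lands before the first `^Match` block rather than the last one.

```
- lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^#?PermitUserEnvironment"
    line: "PermitUserEnvironment no"
    insertbefore: "^Match"
    state: present
```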
##### ACTUAL RESULTS ssh throws an error: ``` /etc/ssh/sshd_config line 94: Directive 'PermitUserEnvironment' is not allowed within a Match block ``` ",1,lineinfile module insertbefore should insert before first match of specified regular expression issue type bug report component name lineinfile module ansible version ansible configuration os environment ubuntu summary insertbefore if specified the line will be inserted before the last match of specified regular expression insertafter if specified the line will be inserted after the last match of specified regular expression it seems to me that insertbefore should insert before the first match of the specified regexp this would be consistent with the way that insertafter works the reverse in our example we are adding lines to sshd config that need to go before any match blocks if there are multiple match blocks then insertbefore won t create a valid sshd config steps to reproduce lineinfile dest etc ssh sshd config regexp permituserenvironment line permituserenvironment no insertbefore match state present expected results if more than one match block is in etc ssh sshd config the line should be inserted before the first actual results ssh throws an error etc ssh sshd config line directive permituserenvironment is not allowed within a match block ,1 1461,6338831979.0,IssuesEvent,2017-07-27 06:24:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Adding `\n` at the end of a job in the cron module breaks idempotence,affects_2.1 bug_report waiting_on_maintainer,"#2316 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron_module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Running with `retry_files_enabled = False` ##### OS / ENVIRONMENT Debian testing running playbooks to a Debian squeeze machine ##### SUMMARY Adding `\n` at the end of a job in the cron module breaks idempotence. I was trying this to try to fix #2316 ##### STEPS TO REPRODUCE ``` - name: cron cron: name=""foo"" hour=4 user=""root"" job=""foobar\n#"" cron_file=ansible_foobar ``` ##### EXPECTED RESULTS The cron file create should be: ``` * 4 * * * root foobar # ``` And stay like this each time the playbook is ran ##### ACTUAL RESULTS The cron file created is at first: ``` * 4 * * * root foobar # ``` Then after running the playbook a second time it becomes: ``` * 4 * * * root foobar # # ``` And so on. ",True,"Adding `\n` at the end of a job in the cron module breaks idempotence - #2316 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron_module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Running with `retry_files_enabled = False` ##### OS / ENVIRONMENT Debian testing running playbooks to a Debian squeeze machine ##### SUMMARY Adding `\n` at the end of a job in the cron module breaks idempotence. I was trying this to try to fix #2316 ##### STEPS TO REPRODUCE ``` - name: cron cron: name=""foo"" hour=4 user=""root"" job=""foobar\n#"" cron_file=ansible_foobar ``` ##### EXPECTED RESULTS The cron file create should be: ``` * 4 * * * root foobar # ``` And stay like this each time the playbook is ran ##### ACTUAL RESULTS The cron file created is at first: ``` * 4 * * * root foobar # ``` Then after running the playbook a second time it becomes: ``` * 4 * * * root foobar # # ``` And so on. 
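A small sketch implied by the cron report above, under the assumption (drawn from the report, not verified here) that the trailing `\n#` in the job is what keeps re-adding `#` lines on every run: the same entry without the trailing newline should stay idempotent.

```
- name: cron
  cron:
    name: foo
    hour: 4
    user: root
    job: foobar            # no trailing "\n#" appended to the job
    cron_file: ansible_foobar
```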
",1,adding n at the end of a job in the cron module breaks idempotence issue type bug report component name cron module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables running with retry files enabled false os environment debian testing running playbooks to a debian squeeze machine summary adding n at the end of a job in the cron module breaks idempotence i was trying this to try to fix steps to reproduce create a new playbook make this playbook only create a cron job in cron d make sure the job finishes with a newline and something after it like n or nfoobar run the playbook multiple times observe the cron being in the changed state each time name cron cron name foo hour user root job foobar n cron file ansible foobar expected results the cron file create should be root foobar and stay like this each time the playbook is ran actual results the cron file created is at first root foobar then after running the playbook a second time it becomes root foobar and so on ,1 1828,6577346514.0,IssuesEvent,2017-09-12 00:16:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template and copy modules should have state parameter,affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME copy template ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY As was suggested in https://github.com/ansible/ansible/issues/6929, I am opening new issue on this matter Please add 'state' parameter for template and copy modules, so items could be removed with the same task by using condition or variable override. ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS Something like this should be possible ``` - name: govern foo template template: src=foo dest=/etc/foo state=""{% if enable_foo == true %}present{% else %}absent{% endif%}"" ``` On second thought, file, copy, and template modules could be merged together into file module to make some cool things like: ``` - name: linked template file: template=foo.conf state=link src=../bar/bar.conf dest=/etc/foo/foo.conf # makes link /etc/foo/foo.conf > /etc/bar/bar.conf, # templates /etc/foo/foo.conf from foo.conf effectively writing to /etc/bar/bar.conf. ``` ##### ACTUAL RESULTS ``` ``` ",True,"template and copy modules should have state parameter - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME copy template ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY As was suggested in https://github.com/ansible/ansible/issues/6929, I am opening new issue on this matter Please add 'state' parameter for template and copy modules, so items could be removed with the same task by using condition or variable override. 
##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS Something like this should be possible ``` - name: govern foo template template: src=foo dest=/etc/foo state=""{% if enable_foo == true %}present{% else %}absent{% endif%}"" ``` On second thought, file, copy, and template modules could be merged together into file module to make some cool things like: ``` - name: linked template file: template=foo.conf state=link src=../bar/bar.conf dest=/etc/foo/foo.conf # makes link /etc/foo/foo.conf > /etc/bar/bar.conf, # templates /etc/foo/foo.conf from foo.conf effectively writing to /etc/bar/bar.conf. ``` ##### ACTUAL RESULTS ``` ``` ",1,template and copy modules should have state parameter issue type feature idea component name copy template ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary as was suggested in i am opening new issue on this matter please add state parameter for template and copy modules so items could be removed with the same task by using condition or variable override steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results something like this should be possible name govern foo template template src foo dest etc foo state if enable foo true present else absent endif on second thought file copy and template modules could be merged together into file module to make some cool things like name linked template file template foo conf state link src bar bar conf dest etc foo foo conf makes link etc foo foo conf etc bar bar conf templates etc foo foo conf from foo conf effectively writing to etc bar bar conf actual results ,1 1830,6577356695.0,IssuesEvent,2017-09-12 00:20:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"nxos_vlan returns ""Command does not support JSON output""",affects_2.1 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_vlan ##### ANSIBLE VERSION ``` vagrant@precise32:/vagrant$ ansible --version ansible 2.1.0.0 config file = /vagrant/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg ask_pass = False gathering = explicit roles_path = /vagrant/roles/ ##### OS / ENVIRONMENT Running from vagrant precise32 Managing Cisco Nexus 3172 Chassis; System version: 6.0(2)U5(2) ##### SUMMARY nxos_vlan module returns ""Command does not support JSON output"" however the vlans are added to the device ##### STEPS TO REPRODUCE Example role ``` - name: Vlan configuration nxos_vlan: admin_state: ""{{ item.admin_state | default(omit) }}"" host: ""{{ inventory_hostname }}"" name: ""{{ item.name | default(omit) }}"" password: ""{{ cisco.nexus.password }}"" port: ""{{ item.port | default(omit) }}"" provider: ""{{ provider | default(omit) }}"" ssh_keyfile: ""{{ ssh_keyfile | default(omit) }}"" state: ""{{ item.state | default(omit) }}"" transport: ""{{ transport | default('cli') }}"" use_ssl: ""{{ use_ssl | default(omit) }}"" username: ""{{ cisco.nexus.username }}"" vlan_id: ""{{ item.vlan_id | default(omit) }}"" vlan_range: ""{{ item.vlan_range | default(omit) }}"" vlan_state: ""{{ item.vlan_state | default(omit) }}"" 
with_items: ""{{ vlans }}"" ``` Example group_vars: ``` vlans: - vlan_id: 500 name: clbv2_vm_mgmt state: present - vlan_id: 600 name: clbv2_vm_snet state: present ``` ##### EXPECTED RESULTS I expected the vlans to be added and the module to return 'changed'. Subsequent runs return 'ok' ##### ACTUAL RESULTS ``` vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab PLAY [all] ********************************************************************* TASK [vlan : Include nxos vlan tasks] ****************************************** included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32 TASK [vlan : Vlan configuration] *********************************************** failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_mgmt"", ""state"": ""present"", ""vlan_id"": 500}, ""msg"": ""Command does not support JSON output""} failed: [10.127.49.31] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_mgmt"", ""state"": ""present"", ""vlan_id"": 500}, ""msg"": ""Command does not support JSON output""} failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_snet"", ""state"": ""present"", ""vlan_id"": 600}, ""msg"": ""Command does not support JSON output""} changed: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) to retry, use: --limit @vlan_test.retry PLAY RECAP ********************************************************************* 10.127.49.31 : ok=1 changed=0 unreachable=0 failed=1 10.127.49.32 : ok=1 changed=0 unreachable=0 failed=1 vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab PLAY [all] ********************************************************************* TASK [vlan : Include nxos vlan tasks] ****************************************** included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32 TASK [vlan : Vlan configuration] *********************************************** ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) PLAY RECAP ********************************************************************* 10.127.49.31 : ok=2 changed=0 unreachable=0 failed=0 10.127.49.32 : ok=2 changed=0 unreachable=0 failed=0 ``` ",True,"nxos_vlan returns ""Command does not support JSON output"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_vlan ##### ANSIBLE VERSION ``` vagrant@precise32:/vagrant$ ansible --version ansible 2.1.0.0 config file = /vagrant/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg ask_pass = False gathering = explicit roles_path = /vagrant/roles/ ##### OS / ENVIRONMENT Running from vagrant precise32 Managing Cisco Nexus 3172 Chassis; System version: 6.0(2)U5(2) ##### SUMMARY nxos_vlan module returns ""Command does not support JSON output"" however the vlans are added to the device ##### STEPS TO REPRODUCE Example 
role ``` - name: Vlan configuration nxos_vlan: admin_state: ""{{ item.admin_state | default(omit) }}"" host: ""{{ inventory_hostname }}"" name: ""{{ item.name | default(omit) }}"" password: ""{{ cisco.nexus.password }}"" port: ""{{ item.port | default(omit) }}"" provider: ""{{ provider | default(omit) }}"" ssh_keyfile: ""{{ ssh_keyfile | default(omit) }}"" state: ""{{ item.state | default(omit) }}"" transport: ""{{ transport | default('cli') }}"" use_ssl: ""{{ use_ssl | default(omit) }}"" username: ""{{ cisco.nexus.username }}"" vlan_id: ""{{ item.vlan_id | default(omit) }}"" vlan_range: ""{{ item.vlan_range | default(omit) }}"" vlan_state: ""{{ item.vlan_state | default(omit) }}"" with_items: ""{{ vlans }}"" ``` Example group_vars: ``` vlans: - vlan_id: 500 name: clbv2_vm_mgmt state: present - vlan_id: 600 name: clbv2_vm_snet state: present ``` ##### EXPECTED RESULTS I expected the vlans to be added and the module to return 'changed'. Subsequent runs return 'ok' ##### ACTUAL RESULTS ``` vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab PLAY [all] ********************************************************************* TASK [vlan : Include nxos vlan tasks] ****************************************** included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32 TASK [vlan : Vlan configuration] *********************************************** failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_mgmt"", ""state"": ""present"", ""vlan_id"": 500}, ""msg"": ""Command does not support JSON output""} failed: [10.127.49.31] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_mgmt"", ""state"": ""present"", ""vlan_id"": 500}, ""msg"": ""Command does not support JSON output""} failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) => {""command"": ""show vlan | json"", ""failed"": true, ""item"": {""name"": ""clbv2_vm_snet"", ""state"": ""present"", ""vlan_id"": 600}, ""msg"": ""Command does not support JSON output""} changed: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) to retry, use: --limit @vlan_test.retry PLAY RECAP ********************************************************************* 10.127.49.31 : ok=1 changed=0 unreachable=0 failed=1 10.127.49.32 : ok=1 changed=0 unreachable=0 failed=1 vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab PLAY [all] ********************************************************************* TASK [vlan : Include nxos vlan tasks] ****************************************** included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32 TASK [vlan : Vlan configuration] *********************************************** ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) PLAY RECAP ********************************************************************* 10.127.49.31 : ok=2 changed=0 unreachable=0 failed=0 10.127.49.32 : ok=2 changed=0 unreachable=0 
failed=0 ``` ",1,nxos vlan returns command does not support json output issue type bug report component name nxos vlan ansible version vagrant vagrant ansible version ansible config file vagrant ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible cfg ask pass false gathering explicit roles path vagrant roles os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from vagrant managing cisco nexus chassis system version summary nxos vlan module returns command does not support json output however the vlans are added to the device steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used example role name vlan configuration nxos vlan admin state item admin state default omit host inventory hostname name item name default omit password cisco nexus password port item port default omit provider provider default omit ssh keyfile ssh keyfile default omit state item state default omit transport transport default cli use ssl use ssl default omit username cisco nexus username vlan id item vlan id default omit vlan range item vlan range default omit vlan state item vlan state default omit with items vlans example group vars vlans vlan id name vm mgmt state present vlan id name vm snet state present expected results i expected the vlans to be added and the module to return changed subsequent runs return ok actual results vagrant vagrant ansible playbook vlan test yml i inventory lab play task included vagrant roles vlan tasks nxos yml for task failed item u state u present u name u vm mgmt u vlan id command show vlan json failed true item name vm mgmt state present vlan id msg command does not support json output failed item u state u present u name u vm mgmt u vlan id command show vlan json failed true item name vm mgmt state present vlan id msg command does not support json output failed item u state u present u name u vm snet u vlan id command show vlan json failed true item name vm snet state present vlan id msg command does not support json output changed item u state u present u name u vm snet u vlan id to retry use limit vlan test retry play recap ok changed unreachable failed ok changed unreachable failed vagrant vagrant ansible playbook vlan test yml i inventory lab play task included vagrant roles vlan tasks nxos yml for task ok item u state u present u name u vm mgmt u vlan id ok item u state u present u name u vm mgmt u vlan id ok item u state u present u name u vm snet u vlan id ok item u state u present u name u vm snet u vlan id play recap ok changed unreachable failed ok changed unreachable failed ,1 1002,4770777155.0,IssuesEvent,2016-10-26 16:05:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"When copying multiple files in a dir, diff is ignored",affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OS X 10.11.5 (15F34) ##### SUMMARY When using the ""if path ends with /, only inside contents of that directory are copied to destination"" functionality --diff is ignored. 
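A hedged workaround sketch for the directory-copy case described above (an assumption on my part, not something the report verified): copying the files one at a time, so each is a normal single-file copy and can produce its own entry in `--check --diff` output. The task fields mirror the reported task; only the loop is new.

```
- name: ensure php-fpm configs match version control
  copy:
    src: "{{ item }}"
    dest: /etc/php-fpm.d/
    owner: root
    group: root
    mode: "0644"
    backup: yes
  with_fileglob:
    - roles/phpfpm/files/phpfpm/*
```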
##### STEPS TO REPRODUCE ``` - name: ensure php-fpm configs match version control copy: src=roles/phpfpm/files/phpfpm/ dest=/etc/php-fpm.d/ owner=root group=root mode=0644 backup=yes ansible-playbook test.yml --check --diff ``` ##### EXPECTED RESULTS As below but with diff results, eg: --- before: /etc/php-fpm.conf +++ after: /Users/tobin/ansible/roles/phpfpm/files/phpfpm/blah.conf @@ -57,4 +57,3 @@ ;;;;;;;;;;;;;;;;;;;; ; Blah (Unrelated: why does it always get upset about trailing lines?) - ##### ACTUAL RESULTS ``` TASK [phpfpm : ensure php-fpm configs match version control] ************* task path: /Users/tobin/src/ansible/roles/phpfpm/tasks/main.yml:81 ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/tobin/.ansible/cp/ansible-ssh-%h-%p-%r rs.dc1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702 `"" && echo ansible-tmp-1466608402.77-139333405869702=""` echo $HOME/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702 `"" ) && sleep 0'""'""'' PUT /var/folders/66/_3c8xw811gndqst9td3rcmn80000gn/T/tmpWaafjg TO /home/tobin/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702/stat SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/tobin/.ansible/cp/ansible-ssh-%h-%p-%r '[rs.dc1]' ...REPEAT FOR EACH FILE... ok: [rs.dc1] => {""changed"": false, ""dest"": ""/etc/php-fpm.d/"", ""invocation"": {""module_args"": {""backup"": ""yes"", ""dest"": ""/etc/php-fpm.d/"", ""group"": ""root"", ""mode"": ""0644"", ""owner"": ""root"", ""src"": ""roles/phpfpm/files/phpfpm/""}, ""module_name"": ""copy""}, ""src"": ""/Users/tobin/src/ansible/roles/phpfpm/files/phpfpm""} ``` ",True,"When copying multiple files in a dir, diff is ignored - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OS X 10.11.5 (15F34) ##### SUMMARY When using the ""if path ends with /, only inside contents of that directory are copied to destination"" functionality --diff is ignored. ##### STEPS TO REPRODUCE ``` - name: ensure php-fpm configs match version control copy: src=roles/phpfpm/files/phpfpm/ dest=/etc/php-fpm.d/ owner=root group=root mode=0644 backup=yes ansible-playbook test.yml --check --diff ``` ##### EXPECTED RESULTS As below but with diff results, eg: --- before: /etc/php-fpm.conf +++ after: /Users/tobin/ansible/roles/phpfpm/files/phpfpm/blah.conf @@ -57,4 +57,3 @@ ;;;;;;;;;;;;;;;;;;;; ; Blah (Unrelated: why does it always get upset about trailing lines?) 
- ##### ACTUAL RESULTS ``` TASK [phpfpm : ensure php-fpm configs match version control] ************* task path: /Users/tobin/src/ansible/roles/phpfpm/tasks/main.yml:81 ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/tobin/.ansible/cp/ansible-ssh-%h-%p-%r rs.dc1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702 `"" && echo ansible-tmp-1466608402.77-139333405869702=""` echo $HOME/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702 `"" ) && sleep 0'""'""'' PUT /var/folders/66/_3c8xw811gndqst9td3rcmn80000gn/T/tmpWaafjg TO /home/tobin/.ansible/tmp/ansible-tmp-1466608402.77-139333405869702/stat SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/tobin/.ansible/cp/ansible-ssh-%h-%p-%r '[rs.dc1]' ...REPEAT FOR EACH FILE... ok: [rs.dc1] => {""changed"": false, ""dest"": ""/etc/php-fpm.d/"", ""invocation"": {""module_args"": {""backup"": ""yes"", ""dest"": ""/etc/php-fpm.d/"", ""group"": ""root"", ""mode"": ""0644"", ""owner"": ""root"", ""src"": ""roles/phpfpm/files/phpfpm/""}, ""module_name"": ""copy""}, ""src"": ""/Users/tobin/src/ansible/roles/phpfpm/files/phpfpm""} ``` ",1,when copying multiple files in a dir diff is ignored issue type bug report component name copy module ansible version ansible config file configured module search path default w o overrides configuration os environment os x summary when using the if path ends with only inside contents of that directory are copied to destination functionality diff is ignored steps to reproduce name ensure php fpm configs match version control copy src roles phpfpm files phpfpm dest etc php fpm d owner root group root mode backup yes ansible playbook test yml check diff expected results as below but with diff results eg before etc php fpm conf after users tobin ansible roles phpfpm files phpfpm blah conf blah unrelated why does it always get upset about trailing lines actual results task task path users tobin src ansible roles phpfpm tasks main yml establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users tobin ansible cp ansible ssh h p r rs bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpwaafjg to home tobin ansible tmp ansible tmp stat ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users tobin ansible cp ansible ssh h p r repeat for each file ok changed false dest etc php fpm d invocation module args backup yes dest etc php fpm d group root mode owner root src roles phpfpm files phpfpm module name copy src users tobin src ansible roles phpfpm files phpfpm ,1 803,4423286535.0,IssuesEvent,2016-08-16 
07:58:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ansible 2.1.0, s3 module bug",aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME s3 module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Only default settings ##### OS / ENVIRONMENT I think N/A. But we use Ubuntu: ``` NAME=""Ubuntu"" VERSION=""14.04.4 LTS, Trusty Tahr"" ID=ubuntu ID_LIKE=debian PRETTY_NAME=""Ubuntu 14.04.4 LTS"" VERSION_ID=""14.04"" HOME_URL=""http://www.ubuntu.com/"" SUPPORT_URL=""http://help.ubuntu.com/"" BUG_REPORT_URL=""http://bugs.launchpad.net/ubuntu/"" ``` ##### SUMMARY Error during using s3 module. For example, during get file from s3 bucket. ##### STEPS TO REPRODUCE Run playbook ``` --- - name: Test playbook hosts: localhost tasks: - name: Get file from S3 s3: bucket= object=/path/to/file> dest=/tmp/file mode=get ``` ##### EXPECTED RESULTS File to /tmp directory ##### ACTUAL RESULTS Error during ansible task running. ``` TASK [Get file from S3] ************************************************ fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to connect to S3: Region does not seem to be available for aws module boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path""} ```",True,"ansible 2.1.0, s3 module bug - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME s3 module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Only default settings ##### OS / ENVIRONMENT I think N/A. But we use Ubuntu: ``` NAME=""Ubuntu"" VERSION=""14.04.4 LTS, Trusty Tahr"" ID=ubuntu ID_LIKE=debian PRETTY_NAME=""Ubuntu 14.04.4 LTS"" VERSION_ID=""14.04"" HOME_URL=""http://www.ubuntu.com/"" SUPPORT_URL=""http://help.ubuntu.com/"" BUG_REPORT_URL=""http://bugs.launchpad.net/ubuntu/"" ``` ##### SUMMARY Error during using s3 module. For example, during get file from s3 bucket. ##### STEPS TO REPRODUCE Run playbook ``` --- - name: Test playbook hosts: localhost tasks: - name: Get file from S3 s3: bucket= object=/path/to/file> dest=/tmp/file mode=get ``` ##### EXPECTED RESULTS File to /tmp directory ##### ACTUAL RESULTS Error during ansible task running. ``` TASK [Get file from S3] ************************************************ fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to connect to S3: Region does not seem to be available for aws module boto.s3. 
If the region definitely exists, you may need to upgrade boto or extend with endpoints_path""} ```",1,ansible module bug issue type bug report component name module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration only default settings os environment i think n a but we use ubuntu name ubuntu version lts trusty tahr id ubuntu id like debian pretty name ubuntu lts version id home url support url bug report url summary error during using module for example during get file from bucket steps to reproduce run playbook name test playbook hosts localhost tasks name get file from bucket object path to file dest tmp file mode get expected results file to tmp directory actual results error during ansible task running task fatal failed changed false failed true msg failed to connect to region does not seem to be available for aws module boto if the region definitely exists you may need to upgrade boto or extend with endpoints path ,1 870,4536511414.0,IssuesEvent,2016-09-08 20:39:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Reseting all locale environment variables causes checkout / update fail on repos that contain unicode filenames.,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 15.10 or 14.04 ##### SUMMARY Reseting all locale environment variables causes checkout / update failure on repos that contain unicode filenames. This bad behavior was introduced with commit 7020c8dcbeb6d08761c014730ee61558793ac00f fixing the issue ""source_control/subversion.py needs to reset LC_MESSAGES #3255"" ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localhost tasks: - subversion: repo=""https://subversion.assembla.com/svn/test-utf8-files/"" dest=""test-utf8-files"" ansible-playbook test.yml ``` ##### EXPECTED RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** fatal: [localhost]: FAILED! 
=> {""changed"": false, ""cmd"": ""/usr/bin/svn --non-interactive --trust-server-cert --no-auth-cache checkout -r HEAD https://subversion.assembla.com/svn/test-utf8-files/ test-utf8-files"", ""failed"": true, ""msg"": ""svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt"", ""rc"": 1, ""stderr"": ""svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\n"", ""stdout"": ""A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\nA test-utf8-files/branches\n"", ""stdout_lines"": [""A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt"", ""A test-utf8-files/branches""]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",True,"Reseting all locale environment variables causes checkout / update fail on repos that contain unicode filenames. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 15.10 or 14.04 ##### SUMMARY Reseting all locale environment variables causes checkout / update failure on repos that contain unicode filenames. 
This bad behavior was introduced with commit 7020c8dcbeb6d08761c014730ee61558793ac00f fixing the issue ""source_control/subversion.py needs to reset LC_MESSAGES #3255"" ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localhost tasks: - subversion: repo=""https://subversion.assembla.com/svn/test-utf8-files/"" dest=""test-utf8-files"" ansible-playbook test.yml ``` ##### EXPECTED RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""cmd"": ""/usr/bin/svn --non-interactive --trust-server-cert --no-auth-cache checkout -r HEAD https://subversion.assembla.com/svn/test-utf8-files/ test-utf8-files"", ""failed"": true, ""msg"": ""svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt"", ""rc"": 1, ""stderr"": ""svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\n"", ""stdout"": ""A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\nA test-utf8-files/branches\n"", ""stdout_lines"": [""A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt"", ""A test-utf8-files/branches""]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",1,reseting all locale environment variables causes checkout update fail on repos that contain unicode filenames issue type bug report component name subversion ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment ubuntu or summary reseting all 
locale environment variables causes checkout update failure on repos that contain unicode filenames this bad behavior was introduced with commit fixing the issue source control subversion py needs to reset lc messages steps to reproduce test yml hosts localhost tasks subversion repo dest test files ansible playbook test yml expected results ansible playbook test yml play task ok task changed play recap localhost ok changed unreachable failed actual results ansible playbook test yml play task ok task fatal failed changed false cmd usr bin svn non interactive trust server cert no auth cache checkout r head test files failed true msg svn failed to run the wc db work queue associated with home mullnerz test ansible subversion test test files branches work item file install txt nsvn can t convert string from utf to native encoding nsvn home mullnerz test ansible subversion test test files txt rc stderr svn failed to run the wc db work queue associated with home mullnerz test ansible subversion test test files branches work item file install txt nsvn can t convert string from utf to native encoding nsvn home mullnerz test ansible subversion test test files txt n stdout a test files txt na test files branches n stdout lines no more hosts left to retry use limit test retry play recap localhost ok changed unreachable failed ,1 1819,6577323799.0,IssuesEvent,2017-09-12 00:06:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment,affects_2.1 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION NO Default ##### OS / ENVIRONMENT Ubuntu 14 /Linux ##### SUMMARY Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment ##### STEPS TO REPRODUCE Create a RHEL VM with ssh key authentication, and run below playbook to stop created VM but we get error ``` --- - hosts: localhost connection: local gather_facts: no tasks: - name: Power Off and On azure_rm_virtualmachine: resource_group: RG-APP name: ""{{ vmName }}"" started: False ``` ##### EXPECTED RESULTS VM to be stopped ##### ACTUAL RESULTS ``` cd3ans@precise64:~/ansibleplay$ ansible-playbook -vvv stopvm.yml --extra-vars ""vmName=POCWEBD001"" Using /etc/ansible/ansible.cfg as config file [WARNING]: provided hosts list is empty, only localhost is available PLAYBOOK: stopvm.yml *********************************************************** 1 plays in stopvm.yml PLAY [localhost] *************************************************************** TASK [Power Off and On] ******************************************************** task path: /home/cd3ans/ansibleplay/stopvm.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: cd3ans <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `"" && echo ansible-tmp-1466790398.4-178591174693281=""` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp3ysraT TO /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine; rm -rf 
""/home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ad_user"": null, ""admin_password"": null, ""admin_username"": null, ""allocated"": false, ""append_tags"": true, ""client_id"": null, ""image"": null, ""location"": null, ""name"": ""POCWEBD001"", ""network_interface_names"": null, ""open_ports"": null, ""os_disk_caching"": ""ReadOnly"", ""os_type"": ""Linux"", ""password"": null, ""profile"": null, ""public_ip_allocation_method"": ""Static"", ""remove_on_absent"": [""all""], ""resource_group"": ""RG-APP"", ""restarted"": false, ""secret"": null, ""short_hostname"": null, ""ssh_password_enabled"": true, ""ssh_public_keys"": null, ""started"": false, ""state"": ""present"", ""storage_account_name"": null, ""storage_blob_name"": null, ""storage_container_name"": ""vhds"", ""subnet_name"": null, ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": null, ""vm_size"": ""Standard_D1""}, ""module_name"": ""azure_rm_virtualmachine""}, ""msg"": ""Error creating or updating virtual machinePOCWEBD001 - Changing property 'linuxConfiguration.ssh.publicKeys' is not allowed.""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @stopvm.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION NO Default ##### OS / ENVIRONMENT Ubuntu 14 /Linux ##### SUMMARY Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment ##### STEPS TO REPRODUCE Create a RHEL VM with ssh key authentication, and run below playbook to stop created VM but we get error ``` --- - hosts: localhost connection: local gather_facts: no tasks: - name: Power Off and On azure_rm_virtualmachine: resource_group: RG-APP name: ""{{ vmName }}"" started: False ``` ##### EXPECTED RESULTS VM to be stopped ##### ACTUAL RESULTS ``` cd3ans@precise64:~/ansibleplay$ ansible-playbook -vvv stopvm.yml --extra-vars ""vmName=POCWEBD001"" Using /etc/ansible/ansible.cfg as config file [WARNING]: provided hosts list is empty, only localhost is available PLAYBOOK: stopvm.yml *********************************************************** 1 plays in stopvm.yml PLAY [localhost] *************************************************************** TASK [Power Off and On] ******************************************************** task path: /home/cd3ans/ansibleplay/stopvm.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: cd3ans <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `"" && echo ansible-tmp-1466790398.4-178591174693281=""` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp3ysraT TO /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine; rm -rf 
""/home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ad_user"": null, ""admin_password"": null, ""admin_username"": null, ""allocated"": false, ""append_tags"": true, ""client_id"": null, ""image"": null, ""location"": null, ""name"": ""POCWEBD001"", ""network_interface_names"": null, ""open_ports"": null, ""os_disk_caching"": ""ReadOnly"", ""os_type"": ""Linux"", ""password"": null, ""profile"": null, ""public_ip_allocation_method"": ""Static"", ""remove_on_absent"": [""all""], ""resource_group"": ""RG-APP"", ""restarted"": false, ""secret"": null, ""short_hostname"": null, ""ssh_password_enabled"": true, ""ssh_public_keys"": null, ""started"": false, ""state"": ""present"", ""storage_account_name"": null, ""storage_blob_name"": null, ""storage_container_name"": ""vhds"", ""subnet_name"": null, ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": null, ""vm_size"": ""Standard_D1""}, ""module_name"": ""azure_rm_virtualmachine""}, ""msg"": ""Error creating or updating virtual machinePOCWEBD001 - Changing property 'linuxConfiguration.ssh.publicKeys' is not allowed.""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @stopvm.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,unable to stop vm using azure rm virtualmachine with ssh key for rhel vm s created in azure or using azure rm deployment issue type bug report component name azure rm virtualmachine ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu linux summary unable to stop vm using azure rm virtualmachine with ssh key for rhel vm s created in azure or using azure rm deployment steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a rhel vm with ssh key authentication and run below playbook to stop created vm but we get error hosts localhost connection local gather facts no tasks name power off and on azure rm virtualmachine resource group rg app name vmname started false expected results vm to be stopped actual results ansibleplay ansible playbook vvv stopvm yml extra vars vmname using etc ansible ansible cfg as config file provided hosts list is empty only localhost is available playbook stopvm yml plays in stopvm yml play task task path home ansibleplay stopvm yml establish local connection for user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp azure rm virtualmachine rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args ad user null admin password null admin username null allocated false append tags true client id null image null location null name network interface names null open ports null os disk caching readonly os type linux password null profile null public ip 
allocation method static remove on absent resource group rg app restarted false secret null short hostname null ssh password enabled true ssh public keys null started false state present storage account name null storage blob name null storage container name vhds subnet name null subscription id null tags null tenant null virtual network name null vm size standard module name azure rm virtualmachine msg error creating or updating virtual changing property linuxconfiguration ssh publickeys is not allowed no more hosts left to retry use limit stopvm retry play recap localhost ok changed unreachable failed ,1 1839,6577373851.0,IssuesEvent,2017-09-12 00:27:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Pulling repository when using net option in docker module with nonexistent network,affects_2.0 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Ansible is pulling repository when using net option in docker module, even if the option pull: missing is set and the image is allready on the system. In our case there was also an error message regarding the repo pull (private repo): unauthorized: authentication required This occurs when the network is not created before using the net option. ##### STEPS TO REPRODUCE ``` - name: some container docker: name: ""some container"" image: myhost/image pull: missing restart_policy: always env: ES_HEAP_SIZE: 4g memory_limit: '8192MB' net: 'mynetwork' state: started ``` ##### EXPECTED RESULTS There should be an error message stating, that the network is not existing. Its very hard for users to identify the problem, as there is no correct error message and the message regarding image pull is misleading. After creating the network with docker network create mynetwork the container is started without pull. ##### ACTUAL RESULTS ``` ``` ",True,"Pulling repository when using net option in docker module with nonexistent network - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Ansible is pulling repository when using net option in docker module, even if the option pull: missing is set and the image is allready on the system. In our case there was also an error message regarding the repo pull (private repo): unauthorized: authentication required This occurs when the network is not created before using the net option. ##### STEPS TO REPRODUCE ``` - name: some container docker: name: ""some container"" image: myhost/image pull: missing restart_policy: always env: ES_HEAP_SIZE: 4g memory_limit: '8192MB' net: 'mynetwork' state: started ``` ##### EXPECTED RESULTS There should be an error message stating, that the network is not existing. Its very hard for users to identify the problem, as there is no correct error message and the message regarding image pull is misleading. After creating the network with docker network create mynetwork the container is started without pull. 
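Until the module reports a clearer error for a missing network, a workaround consistent with the note above is to make sure the user-defined network exists before the container task runs. This is only a sketch against the older `docker` module shown in the report; the network name `mynetwork` and the image are taken from the example, and the "already exists" check assumes the usual Docker CLI error text.
```yaml
- name: Ensure the user-defined network exists before starting the container
  command: docker network create mynetwork
  register: net_create
  changed_when: net_create.rc == 0
  failed_when: net_create.rc != 0 and 'already exists' not in net_create.stderr

- name: some container
  docker:
    name: "some container"
    image: myhost/image
    pull: missing
    restart_policy: always
    net: 'mynetwork'
    state: started
```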
##### ACTUAL RESULTS ``` ``` ",1,pulling repository when using net option in docker module with nonexistent network issue type bug report component name docker ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible is pulling repository when using net option in docker module even if the option pull missing is set and the image is allready on the system in our case there was also an error message regarding the repo pull private repo unauthorized authentication required this occurs when the network is not created before using the net option steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name some container docker name some container image myhost image pull missing restart policy always env es heap size memory limit net mynetwork state started expected results there should be an error message stating that the network is not existing its very hard for users to identify the problem as there is no correct error message and the message regarding image pull is misleading after creating the network with docker network create mynetwork the container is started without pull actual results ,1 1741,6574889313.0,IssuesEvent,2017-09-11 14:24:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Check if service is installed just with conditionals,affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/20 17:00:41 (GMT +000) ``` ##### SUMMARY When I want to ensure, that a certain program is NOT running, I use the `service` module and set the parameter `state: stopped`. This works perfectly fine, when the service is really installed. But in many cases, the service isn't installed and than this module just fails. This is really annoying, because a task must be added before the service task, which checks somehow if the service is even installed. It would be very helpful, to somehow extend the service-module to stop a service only if it is available. In my point of view the most generic solution would be to add some Utility usable in the when block: ``` - service name: topbeat state: stopped when: services.state.topbeat is present ``` ",True,"Check if service is installed just with conditionals - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/20 17:00:41 (GMT +000) ``` ##### SUMMARY When I want to ensure, that a certain program is NOT running, I use the `service` module and set the parameter `state: stopped`. This works perfectly fine, when the service is really installed. But in many cases, the service isn't installed and than this module just fails. This is really annoying, because a task must be added before the service task, which checks somehow if the service is even installed. It would be very helpful, to somehow extend the service-module to stop a service only if it is available. 
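As a stop-gap until the module grows such a check, the installed state can be probed first and the `service` task gated on the result. A minimal sketch, assuming a systemd host and using `topbeat` purely as the example unit name from this report:
```yaml
- name: Check whether the topbeat unit exists
  command: systemctl list-unit-files topbeat.service
  register: topbeat_unit
  changed_when: false
  failed_when: false

- name: Stop topbeat only when the unit is actually installed
  service:
    name: topbeat
    state: stopped
  when: "'topbeat.service' in topbeat_unit.stdout"
```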
In my point of view the most generic solution would be to add some Utility usable in the when block: ``` - service name: topbeat state: stopped when: services.state.topbeat is present ``` ",1,check if service is installed just with conditionals issue type feature idea component name service ansible version ansible detached head last updated gmt summary when i want to ensure that a certain program is not running i use the service module and set the parameter state stopped this works perfectly fine when the service is really installed but in many cases the service isn t installed and than this module just fails this is really annoying because a task must be added before the service task which checks somehow if the service is even installed it would be very helpful to somehow extend the service module to stop a service only if it is available in my point of view the most generic solution would be to add some utility usable in the when block service name topbeat state stopped when services state topbeat is present ,1 1875,6577504564.0,IssuesEvent,2017-09-12 01:22:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Add Target proxy, URL map and SSL certificate resource management to GCE",affects_2.3 cloud feature_idea gce waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME GCE ##### ANSIBLE VERSION N/A ##### SUMMARY Currently it is not possible to manage target proxies, their url map and ssl certificate resources with GCE modules. ##### EXPECTED RESULTS I would like to provision a bunch of webservers, put them under load balancer and set up multiple SSL hosts that will be accepted by LB, end outer SSL connection and forward requests to mapped webservers. ##### ACTUAL RESULTS This is currently possible only by invoking the gcutil directly with command/shell. More info: https://cloud.google.com/compute/docs/load-balancing/http/target-proxies ",True,"Add Target proxy, URL map and SSL certificate resource management to GCE - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME GCE ##### ANSIBLE VERSION N/A ##### SUMMARY Currently it is not possible to manage target proxies, their url map and ssl certificate resources with GCE modules. ##### EXPECTED RESULTS I would like to provision a bunch of webservers, put them under load balancer and set up multiple SSL hosts that will be accepted by LB, end outer SSL connection and forward requests to mapped webservers. ##### ACTUAL RESULTS This is currently possible only by invoking the gcutil directly with command/shell. 
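For reference, a command/shell-based interim approach could look roughly like the sketch below. The resource names (`my-cert`, `my-url-map`, `my-backend-service`, `my-proxy`) and file paths are placeholders, the tasks are not idempotent, and the exact `gcloud` flags may differ between SDK versions.
```yaml
- name: Upload an SSL certificate resource (placeholder file paths)
  command: >
    gcloud compute ssl-certificates create my-cert
    --certificate cert.pem --private-key key.pem

- name: Create a URL map pointing at an existing backend service
  command: >
    gcloud compute url-maps create my-url-map
    --default-service my-backend-service

- name: Create a target HTTP proxy for the URL map
  command: >
    gcloud compute target-http-proxies create my-proxy
    --url-map my-url-map
```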
More info: https://cloud.google.com/compute/docs/load-balancing/http/target-proxies ",1,add target proxy url map and ssl certificate resource management to gce issue type feature idea component name gce ansible version n a summary currently it is not possible to manage target proxies their url map and ssl certificate resources with gce modules expected results i would like to provision a bunch of webservers put them under load balancer and set up multiple ssl hosts that will be accepted by lb end outer ssl connection and forward requests to mapped webservers actual results this is currently possible only by invoking the gcutil directly with command shell more info ,1 1826,6577335615.0,IssuesEvent,2017-09-12 00:11:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,lineinfile with regexp writes line when there isn't a match,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Bone stock ##### OS / ENVIRONMENT CentOS 7 ##### SUMMARY When lineinfile with regexp finds a match, it substitutes properly. When it doesn't find a match (say, on subsequent runs), it just dumps the specified line at the bottom of the file as if you hadn't specified a regexp. Needless to say, this breaks idempotence. ##### STEPS TO REPRODUCE 1. Take a look at a file. 2. Run lineinfile with regexp that matches a line 3. See that your line was in fact replaced. 4. Run lineinfile again. 5. See that the specified replacement line is now duplicated at the bottom of the file. ``` - name: Add ops scripts to sudo secure_path lineinfile: dest: /etc/sudoers regexp: > ^Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin$ line: 'Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin' validate: visudo -cf %s ``` ##### EXPECTED RESULTS Idempotent line replacement after a second run ``` # Refuse to run if unable to disable echo on the tty. Defaults !visiblepw # # Preserving HOME has security implications since many programs # use it when searching for configuration files. Note that HOME # is already set when the the env_reset option is enabled, so # this option is only effective for configurations where either # env_reset is disabled or HOME is present in the env_keep list. # Defaults always_set_home Defaults env_reset Defaults env_keep = ""COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"" Defaults env_keep += ""MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"" Defaults env_keep += ""LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"" Defaults env_keep += ""LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"" Defaults env_keep += ""LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"" # # Adding HOME to env_keep may enable a user to run unrestricted # commands via sudo. # # Defaults env_keep += ""HOME"" Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). ## Syntax: ## ## user MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. 
## ## Allow root to run any commands anywhere root ALL=(ALL) ALL ## Allows people in group wheel to run all commands without a password %wheel ALL=(ALL) NOPASSWD: ALL ``` ##### ACTUAL RESULTS Potentially show-stopping garbage at the bottom of a file ``` # Refuse to run if unable to disable echo on the tty. Defaults !visiblepw # # Preserving HOME has security implications since many programs # use it when searching for configuration files. Note that HOME # is already set when the the env_reset option is enabled, so # this option is only effective for configurations where either # env_reset is disabled or HOME is present in the env_keep list. # Defaults always_set_home Defaults env_reset Defaults env_keep = ""COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"" Defaults env_keep += ""MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"" Defaults env_keep += ""LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"" Defaults env_keep += ""LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"" Defaults env_keep += ""LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"" # # Adding HOME to env_keep may enable a user to run unrestricted # commands via sudo. # # Defaults env_keep += ""HOME"" Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). ## Syntax: ## ## user MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. ## ## Allow root to run any commands anywhere root ALL=(ALL) ALL ## Allows people in group wheel to run all commands without a password %wheel ALL=(ALL) NOPASSWD: ALL Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ``` ",True,"lineinfile with regexp writes line when there isn't a match - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Bone stock ##### OS / ENVIRONMENT CentOS 7 ##### SUMMARY When lineinfile with regexp finds a match, it substitutes properly. When it doesn't find a match (say, on subsequent runs), it just dumps the specified line at the bottom of the file as if you hadn't specified a regexp. Needless to say, this breaks idempotence. ##### STEPS TO REPRODUCE 1. Take a look at a file. 2. Run lineinfile with regexp that matches a line 3. See that your line was in fact replaced. 4. Run lineinfile again. 5. See that the specified replacement line is now duplicated at the bottom of the file. ``` - name: Add ops scripts to sudo secure_path lineinfile: dest: /etc/sudoers regexp: > ^Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin$ line: 'Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin' validate: visudo -cf %s ``` ##### EXPECTED RESULTS Idempotent line replacement after a second run ``` # Refuse to run if unable to disable echo on the tty. Defaults !visiblepw # # Preserving HOME has security implications since many programs # use it when searching for configuration files. Note that HOME # is already set when the the env_reset option is enabled, so # this option is only effective for configurations where either # env_reset is disabled or HOME is present in the env_keep list. 
# Defaults always_set_home Defaults env_reset Defaults env_keep = ""COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"" Defaults env_keep += ""MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"" Defaults env_keep += ""LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"" Defaults env_keep += ""LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"" Defaults env_keep += ""LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"" # # Adding HOME to env_keep may enable a user to run unrestricted # commands via sudo. # # Defaults env_keep += ""HOME"" Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). ## Syntax: ## ## user MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. ## ## Allow root to run any commands anywhere root ALL=(ALL) ALL ## Allows people in group wheel to run all commands without a password %wheel ALL=(ALL) NOPASSWD: ALL ``` ##### ACTUAL RESULTS Potentially show-stopping garbage at the bottom of a file ``` # Refuse to run if unable to disable echo on the tty. Defaults !visiblepw # # Preserving HOME has security implications since many programs # use it when searching for configuration files. Note that HOME # is already set when the the env_reset option is enabled, so # this option is only effective for configurations where either # env_reset is disabled or HOME is present in the env_keep list. # Defaults always_set_home Defaults env_reset Defaults env_keep = ""COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"" Defaults env_keep += ""MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"" Defaults env_keep += ""LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"" Defaults env_keep += ""LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"" Defaults env_keep += ""LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"" # # Adding HOME to env_keep may enable a user to run unrestricted # commands via sudo. # # Defaults env_keep += ""HOME"" Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). ## Syntax: ## ## user MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. 
## ## Allow root to run any commands anywhere root ALL=(ALL) ALL ## Allows people in group wheel to run all commands without a password %wheel ALL=(ALL) NOPASSWD: ALL Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin ``` ",1,lineinfile with regexp writes line when there isn t a match issue type bug report component name lineinfile ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration bone stock os environment centos summary when lineinfile with regexp finds a match it substitutes properly when it doesn t find a match say on subsequent runs it just dumps the specified line at the bottom of the file as if you hadn t specified a regexp needless to say this breaks idempotence steps to reproduce take a look at a file run lineinfile with regexp that matches a line see that your line was in fact replaced run lineinfile again see that the specified replacement line is now duplicated at the bottom of the file name add ops scripts to sudo secure path lineinfile dest etc sudoers regexp defaults secure path sbin bin usr sbin usr bin line defaults secure path opt bin sbin bin usr sbin usr bin validate visudo cf s expected results idempotent line replacement after a second run refuse to run if unable to disable echo on the tty defaults visiblepw preserving home has security implications since many programs use it when searching for configuration files note that home is already set when the the env reset option is enabled so this option is only effective for configurations where either env reset is disabled or home is present in the env keep list defaults always set home defaults env reset defaults env keep colors display hostname histsize inputrc kdedir ls colors defaults env keep mail qtdir username lang lc address lc ctype defaults env keep lc collate lc identification lc measurement lc messages defaults env keep lc monetary lc name lc numeric lc paper lc telephone defaults env keep lc time lc all language linguas xkb charset xauthority adding home to env keep may enable a user to run unrestricted commands via sudo defaults env keep home defaults secure path opt bin sbin bin usr sbin usr bin next comes the main part which users can run what software on which machines the sudoers file can be shared between multiple systems syntax user machine commands the commands section may have other options added to it allow root to run any commands anywhere root all all all allows people in group wheel to run all commands without a password wheel all all nopasswd all actual results potentially show stopping garbage at the bottom of a file refuse to run if unable to disable echo on the tty defaults visiblepw preserving home has security implications since many programs use it when searching for configuration files note that home is already set when the the env reset option is enabled so this option is only effective for configurations where either env reset is disabled or home is present in the env keep list defaults always set home defaults env reset defaults env keep colors display hostname histsize inputrc kdedir ls colors defaults env keep mail qtdir username lang lc address lc ctype defaults env keep lc collate lc identification lc measurement lc messages defaults env keep lc monetary lc name lc numeric lc paper lc telephone defaults env keep lc time lc all language linguas xkb charset xauthority adding home to env keep may enable a user to run unrestricted commands via sudo defaults env keep home defaults secure path opt 
bin sbin bin usr sbin usr bin next comes the main part which users can run what software on which machines the sudoers file can be shared between multiple systems syntax user machine commands the commands section may have other options added to it allow root to run any commands anywhere root all all all allows people in group wheel to run all commands without a password wheel all all nopasswd all defaults secure path opt bin sbin bin usr sbin usr bin ,1 1911,6577573138.0,IssuesEvent,2017-09-12 01:51:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,rax module fails to create Cloud Server with multiple Cloud Images with same human_id,affects_1.9 bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: `rax` ##### Ansible Version: ``` ansible 1.9.3 configured module search path = None ``` This includes vendored `pyrax==1.9.7`. ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The rax module supports the image key to be either `id` (eg UUID) or the `human_id`. The Cloud Images API v2 supports images to be created with the same `human_id`. However when there are greater than one `human_id` in an images list, the `rax` module fails a create. ##### Steps To Reproduce: - Create Cloud Server - Create Cloud Image of Cloud Server - Call `rax` module to create additional Cloud Server from `human_id` image of above Cloud Image. You should succeed. - Create Cloud Image of same Cloud Server reusing the same `human_id`. - Call `rax` module to create additional Cloud Server from `human_id` image of above Cloud Image. You fail with `No matching image found (${IMAGE_NAME})` - Delete Cloud Image created from the same `human_id`, leaving behind one Cloud Image of the given `human_id`. - Call `rax` module to create Cloud Server from `human_id` image of above Cloud Image. You should succeed. ##### Expected Results: You should succeed with the second Cloud Server build. ##### Actual Results: You fail with `No matching image found (${IMAGE_NAME})`. ",True,"rax module fails to create Cloud Server with multiple Cloud Images with same human_id - ##### Issue Type: - Bug Report ##### Plugin Name: `rax` ##### Ansible Version: ``` ansible 1.9.3 configured module search path = None ``` This includes vendored `pyrax==1.9.7`. ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The rax module supports the image key to be either `id` (eg UUID) or the `human_id`. The Cloud Images API v2 supports images to be created with the same `human_id`. However when there are greater than one `human_id` in an images list, the `rax` module fails a create. ##### Steps To Reproduce: - Create Cloud Server - Create Cloud Image of Cloud Server - Call `rax` module to create additional Cloud Server from `human_id` image of above Cloud Image. You should succeed. - Create Cloud Image of same Cloud Server reusing the same `human_id`. - Call `rax` module to create additional Cloud Server from `human_id` image of above Cloud Image. You fail with `No matching image found (${IMAGE_NAME})` - Delete Cloud Image created from the same `human_id`, leaving behind one Cloud Image of the given `human_id`. - Call `rax` module to create Cloud Server from `human_id` image of above Cloud Image. You should succeed. ##### Expected Results: You should succeed with the second Cloud Server build. ##### Actual Results: You fail with `No matching image found (${IMAGE_NAME})`. 
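A practical way to sidestep the ambiguity described above is to reference the image by its UUID rather than by `human_id`, since the UUID stays unique even when several images share a name. The sketch below uses placeholder values for the server name, flavor and UUID.
```yaml
- name: Build a Cloud Server from an unambiguous image reference
  rax:
    name: web01
    flavor: general1-1
    image: 00000000-0000-0000-0000-000000000000   # image UUID instead of human_id
    count: 1
    state: present
    wait: yes
```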
",1,rax module fails to create cloud server with multiple cloud images with same human id issue type bug report plugin name rax ansible version ansible configured module search path none this includes vendored pyrax ansible configuration n a environment n a summary the rax module supports the image key to be either id eg uuid or the human id the cloud images api supports images to be created with the same human id however when there are greater than one human id in an images list the rax module fails a create steps to reproduce create cloud server create cloud image of cloud server call rax module to create additional cloud server from human id image of above cloud image you should succeed create cloud image of same cloud server reusing the same human id call rax module to create additional cloud server from human id image of above cloud image you fail with no matching image found image name delete cloud image created from the same human id leaving behind one cloud image of the given human id call rax module to create cloud server from human id image of above cloud image you should succeed expected results you should succeed with the second cloud server build actual results you fail with no matching image found image name ,1 1702,6574397126.0,IssuesEvent,2017-09-11 12:44:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt module always returns CHANGED when using update_cache,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Debian Jessie 64 GNU/Linux ##### SUMMARY When installing packages and using update_cache option apt module always returns state CHANGED even if packages are already installed. In my opinion expected behavior would be: 1. when using update_cache AND installing package(s) return status CHANGED only when package has been installed or updated, no matter if cache has been updated or not. 2. when using update_cache without installing any package(s) return status CHANGED when cache has been updated. 
##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: no become: yes tasks: - apt: name: htop update_cache: yes ``` ##### EXPECTED RESULTS As in ansible 2.1: ``` PLAY [localhost] *************************************************************** TASK [apt] ********************************************************************* ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=0 ``` Part of debug output: ``` ok: [localhost] => {""cache_update_time"": 1478621386, ""cache_updated"": true, ""changed"": false, ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""htop"", ""only_upgrade"": false, ""package"": [""htop""], ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null}, ""module_name"": ""apt""}} ``` ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [apt] ********************************************************************* changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=1 changed=1 unreachable=0 failed=0 ``` Part of debug output: ``` changed: [localhost] => { ""cache_update_time"": 1478621193, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 0, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""htop"", ""only_upgrade"": false, ""package"": [ ""htop"" ], ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": ""apt"" } } ```",True,"apt module always returns CHANGED when using update_cache - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Debian Jessie 64 GNU/Linux ##### SUMMARY When installing packages and using update_cache option apt module always returns state CHANGED even if packages are already installed. In my opinion expected behavior would be: 1. when using update_cache AND installing package(s) return status CHANGED only when package has been installed or updated, no matter if cache has been updated or not. 2. when using update_cache without installing any package(s) return status CHANGED when cache has been updated. 
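Until the changed logic is adjusted, one way to keep the install task's status meaningful is to split the cache refresh into its own task, so each task reports its own change. A minimal sketch, reusing the `htop` example:
```yaml
- name: Refresh the apt cache (reports changed on its own)
  apt:
    update_cache: yes
    cache_valid_time: 3600

- name: Install htop (changed only when the package actually changes)
  apt:
    name: htop
    state: present
```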
##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: no become: yes tasks: - apt: name: htop update_cache: yes ``` ##### EXPECTED RESULTS As in ansible 2.1: ``` PLAY [localhost] *************************************************************** TASK [apt] ********************************************************************* ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=0 ``` Part of debug output: ``` ok: [localhost] => {""cache_update_time"": 1478621386, ""cache_updated"": true, ""changed"": false, ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""htop"", ""only_upgrade"": false, ""package"": [""htop""], ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null}, ""module_name"": ""apt""}} ``` ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [apt] ********************************************************************* changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=1 changed=1 unreachable=0 failed=0 ``` Part of debug output: ``` changed: [localhost] => { ""cache_update_time"": 1478621193, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 0, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""htop"", ""only_upgrade"": false, ""package"": [ ""htop"" ], ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": ""apt"" } } ```",1,apt module always returns changed when using update cache issue type bug report component name apt ansible version ansible configuration default os environment debian jessie gnu linux summary when installing packages and using update cache option apt module always returns state changed even if packages are already installed in my opinion expected behavior would be when using update cache and installing package s return status changed only when package has been installed or updated no matter if cache has been updated or not when using update cache without installing any package s return status changed when cache has been updated steps to reproduce hosts localhost gather facts no become yes tasks apt name htop update cache yes expected results as in ansible play task ok play recap localhost ok changed unreachable failed part of debug output ok cache update time cache updated true changed false invocation module args allow unauthenticated false autoremove false cache valid time null deb null default release null dpkg options force confdef force confold force false install recommends null name htop only upgrade false package purge false state present update cache true upgrade null module name apt actual results play task changed play recap localhost ok changed unreachable failed part of debug output changed cache update time cache updated true changed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends 
null name htop only upgrade false package htop purge false state present update cache true upgrade null module name apt ,1 1901,6577555579.0,IssuesEvent,2017-09-12 01:44:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_asg: Usage of replace_all_instances errors out if no instances exist yet,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: ec2_asg ##### Ansible Version: ``` ansible 2.1.0 config file = /home/wholroyd/source/automation/ansible/ansible.cfg configured module search path = ./library ``` ##### Ansible Configuration: Error is strictly related to module logic. ##### Environment: Fedora 23 server with everything running locally ##### Summary: When attempting to standup an ASG and attempting to use the 'replace_all_instances' flag set to true, the module fails because it can't find any running instances. I'd like to be able to make this script re-runnable so that each time it's run, I'm able to change the related launch configuration. Not all ASGs have any running instances (think of two ASGs used in A/B testing with one empty), especially at creation time. ##### Steps To Reproduce: Create a launch configuration. Create an asg with `replace_all_instances: true` referencing the launch configuration ``` yaml - name: Ensure the existance of the new launch configuration ec2_lc: region: ""{{ region }}"" profile: ""{{ account }}"" state: present name: ""{{ vpc_name }}-Routing-{{ timestamp }}"" image_id: ""{{ cloud_ami.results[0].ami_id }}"" instance_type: ""t2.small"" instance_profile_name: ""{{ account }}-routing-monitor"" key_name: ""{{ account }}_{{ region }}"" assign_public_ip: yes security_groups: ""{{ existing_groups_list }}"" user_data: ""{{ lookup('file', 'roles/account_infrastructure/files/infrastructure-routing.sh') }}"" - name: ""Verify the virtual private cloud subnet to use"" ec2_vpc_subnet_facts: region: ""{{ region }}"" profile: ""{{ account }}"" filters: ""tag:Name"" : ""{{ vpc_name }}-Core"" ""availability-zone"" : ""{{ datacenter_regions_zones[region][0] }}"" register: account_cloud_vpc_subnets_a - debug: var=account_cloud_vpc_subnets_a - name: Ensure the existence of the new Auto Scale Group in zone A ec2_asg: region: ""{{ region }}"" profile: ""{{ account }}"" state: present name: ""{{ vpc_name }}-Routing-{{ datacenter_regions_zones[region][0] }}"" min_size: 0 max_size: 1 desired_capacity: 1 availability_zones: [ ""{{ datacenter_regions_zones[region][0] }}"" ] vpc_zone_identifier: ""{{ account_cloud_vpc_subnets_a.subnets[0].id }}"" launch_config_name: ""{{ vpc_name }}-Routing-{{ timestamp }}"" wait_for_instances: false replace_all_instances: true ``` ##### Expected Results: 1. Creation of ASG without any issue. 2. Modification of ASG and instances replaced (if they exist) ##### Actual Results: The module fails to create the ASG because it doesn't hold any instances yet. 
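As a workaround until the module handles empty groups, instance replacement can be requested in a second task that only runs when the group already reports instances. This is a sketch reusing the variables from the playbook above; whether the registered result exposes an `instances` list can vary by Ansible version, hence the `default([])` guard.
```yaml
- name: Ensure the ASG exists, without asking for replacement yet
  ec2_asg:
    region: "{{ region }}"
    profile: "{{ account }}"
    state: present
    name: "{{ vpc_name }}-Routing-{{ datacenter_regions_zones[region][0] }}"
    min_size: 0
    max_size: 1
    desired_capacity: 1
    vpc_zone_identifier: "{{ account_cloud_vpc_subnets_a.subnets[0].id }}"
    launch_config_name: "{{ vpc_name }}-Routing-{{ timestamp }}"
    wait_for_instances: false
  register: routing_asg

- name: Replace instances only when the group already has some
  ec2_asg:
    region: "{{ region }}"
    profile: "{{ account }}"
    state: present
    name: "{{ vpc_name }}-Routing-{{ datacenter_regions_zones[region][0] }}"
    min_size: 0
    max_size: 1
    desired_capacity: 1
    launch_config_name: "{{ vpc_name }}-Routing-{{ timestamp }}"
    replace_all_instances: true
  when: (routing_asg.instances | default([])) | length > 0
```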
``` ESTABLISH LOCAL CONNECTION FOR USER: wholroyd localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855 `"" )' localhost PUT /tmp/tmp9W7ZbD TO /home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg localhost EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg; rm -rf ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/"" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 3104, in main() File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 3098, in main replace_changed, asg_properties=replace(connection, module) File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 2832, in replace instances = props['instances'] KeyError: 'instances' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_asg""}, ""parsed"": false} to retry, use: --limit @account.retry ``` ",True,"ec2_asg: Usage of replace_all_instances errors out if no instances exist yet - ##### Issue Type: - Bug Report ##### Plugin Name: ec2_asg ##### Ansible Version: ``` ansible 2.1.0 config file = /home/wholroyd/source/automation/ansible/ansible.cfg configured module search path = ./library ``` ##### Ansible Configuration: Error is strictly related to module logic. ##### Environment: Fedora 23 server with everything running locally ##### Summary: When attempting to standup an ASG and attempting to use the 'replace_all_instances' flag set to true, the module fails because it can't find any running instances. I'd like to be able to make this script re-runnable so that each time it's run, I'm able to change the related launch configuration. Not all ASGs have any running instances (think of two ASGs used in A/B testing with one empty), especially at creation time. ##### Steps To Reproduce: Create a launch configuration. 
Create an asg with `replace_all_instances: true` referencing the launch configuration ``` yaml - name: Ensure the existance of the new launch configuration ec2_lc: region: ""{{ region }}"" profile: ""{{ account }}"" state: present name: ""{{ vpc_name }}-Routing-{{ timestamp }}"" image_id: ""{{ cloud_ami.results[0].ami_id }}"" instance_type: ""t2.small"" instance_profile_name: ""{{ account }}-routing-monitor"" key_name: ""{{ account }}_{{ region }}"" assign_public_ip: yes security_groups: ""{{ existing_groups_list }}"" user_data: ""{{ lookup('file', 'roles/account_infrastructure/files/infrastructure-routing.sh') }}"" - name: ""Verify the virtual private cloud subnet to use"" ec2_vpc_subnet_facts: region: ""{{ region }}"" profile: ""{{ account }}"" filters: ""tag:Name"" : ""{{ vpc_name }}-Core"" ""availability-zone"" : ""{{ datacenter_regions_zones[region][0] }}"" register: account_cloud_vpc_subnets_a - debug: var=account_cloud_vpc_subnets_a - name: Ensure the existence of the new Auto Scale Group in zone A ec2_asg: region: ""{{ region }}"" profile: ""{{ account }}"" state: present name: ""{{ vpc_name }}-Routing-{{ datacenter_regions_zones[region][0] }}"" min_size: 0 max_size: 1 desired_capacity: 1 availability_zones: [ ""{{ datacenter_regions_zones[region][0] }}"" ] vpc_zone_identifier: ""{{ account_cloud_vpc_subnets_a.subnets[0].id }}"" launch_config_name: ""{{ vpc_name }}-Routing-{{ timestamp }}"" wait_for_instances: false replace_all_instances: true ``` ##### Expected Results: 1. Creation of ASG without any issue. 2. Modification of ASG and instances replaced (if they exist) ##### Actual Results: The module fails to create the ASG because it doesn't hold any instances yet. ``` ESTABLISH LOCAL CONNECTION FOR USER: wholroyd localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855 `"" )' localhost PUT /tmp/tmp9W7ZbD TO /home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg localhost EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg; rm -rf ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/"" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 3104, in main() File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 3098, in main replace_changed, asg_properties=replace(connection, module) File ""/home/wholroyd/.ansible/tmp/ansible-tmp-1456849907.05-42662666435855/ec2_asg"", line 2832, in replace instances = props['instances'] KeyError: 'instances' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_asg""}, ""parsed"": false} to retry, use: --limit @account.retry ``` ",1, asg usage of replace all instances errors out if no instances exist yet issue type bug report plugin name asg ansible version ansible config file home wholroyd source automation ansible ansible cfg configured module search path library ansible configuration error is strictly related to module logic environment fedora server with everything running locally summary when attempting to standup an asg and attempting to use the replace all instances flag set to true the module fails because it can t find any running instances i d like to be able to make this script re runnable so that each time it s run i m able to change the related launch configuration not all asgs have any running instances think of two asgs used in a b testing with one empty especially at creation time steps to reproduce create a launch configuration create an asg with replace all instances true referencing the launch configuration yaml name ensure the existance of the new launch configuration lc region region profile account state present name vpc name routing timestamp image id cloud ami results ami id instance type small instance profile name account routing monitor key name account region assign public ip yes security groups existing groups list user data lookup file roles account infrastructure files infrastructure routing sh name verify the virtual private cloud subnet to use vpc subnet facts region region profile account filters tag name vpc name core availability zone datacenter regions zones register account cloud vpc subnets a debug var account cloud vpc subnets a name ensure the existence of the new auto scale group in zone a asg region region profile account state present name vpc name routing datacenter regions zones min size max size desired capacity availability zones vpc zone identifier account cloud vpc subnets a subnets id launch config name vpc name routing timestamp wait for instances false replace all instances true expected results creation of asg without any issue modification of asg and instances replaced if they exist actual results the module fails to create the asg because it doesn t hold any instances yet establish local connection for user wholroyd localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put tmp to home wholroyd ansible tmp ansible tmp asg localhost exec bin sh c lang c lc all c lc messages c usr bin python home wholroyd ansible tmp ansible tmp asg rm rf home wholroyd ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file home wholroyd ansible tmp ansible tmp asg line in main file home wholroyd ansible tmp ansible tmp asg line in main replace changed asg properties replace connection module file home wholroyd ansible tmp ansible tmp asg line in replace instances props keyerror instances fatal failed changed false failed true invocation module name asg parsed false to retry use limit account retry ,1 1173,5095182264.0,IssuesEvent,2017-01-03 14:25:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_elb_lb intermittently failing to create an ELB with KeyError exception,affects_2.2 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb.py ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 
fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700) lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY When running a playbook to create multiple ELBs, some of the ELBs would intermittently fail to get created with a KeyError exception. I tracked this issue down to some data structure inconsistency in the _set_elb_listeners function. It would appear that the data type stored in self.elb.listeners is sometimes a dict and other times a tuple. I hacked around this issue but am not sure if my fix has broken anything else. See the diff below: ``` user@utf:/tmp$ diff ec2_elb_lb.py mine.py 755,758c755,760 < if existing_listener[0] == int(listener['load_balancer_port']): < existing_listener_found = self._api_listener_as_tuple(existing_listener) < break < --- > try: > if self._listener_as_tuple(existing_listener)[0] == int(listener['load_balancer_port']): > existing_listener_found = self._listener_as_tuple(existing_listener) > break > except KeyError: > self.module.fail_json(msg=""Ran into keyerror bug. self.elb.listeners is '%s'"" % self.elb.listeners) 776c778 < existing_listener_tuple = self._api_listener_as_tuple(existing_listener) --- > existing_listener_tuple = self._listener_as_tuple(existing_listener) ``` ##### STEPS TO REPRODUCE ``` - name: Create Control ELB ec2_elb_lb: name: ""Control-ELB"" state: present security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true idle_timeout: 60 subnets: - ""{{ subnet_id }}"" purge_listeners: true listeners: - protocol: tcp load_balancer_port: 443 instance_port: 443 health_check: ping_protocol: http ping_port: 80 ping_path: ""/health.html"" response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 - name: Create Service ELB ec2_elb_lb: name: ""Service-ELB"" state: present security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true idle_timeout: 3600 subnets: - ""{{ subnet_id }}"" purge_listeners: true listeners: - protocol: tcp load_balancer_port: 443 instance_port: 443 health_check: ping_protocol: https ping_port: 443 ping_path: ""/health.html"" response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 - name: Create Internal ELB ec2_elb_lb: name: ""Int-ELB"" state: present subnets: - ""{{ subnet_id }}"" security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true scheme: ""internal"" idle_timeout: 3600 purge_listeners: true listeners: - protocol: tcp load_balancer_port: 80 instance_port: 80 health_check: ping_protocol: tcp ping_port: 80 response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 ``` ##### EXPECTED RESULTS I expect all ELBs to be created properly with the given settings. ##### ACTUAL RESULTS The ELBs get created but Ansible quits with a Python exception which breaks further playbooks from running in a workflow. 
``` TASK [Create Internal ELB] ****************************************** task path: /home/nsroot/ansible-work/playbooks/create-elbs.yml:96 Using module file /home/nsroot/ansible/lib/ansible/modules/core/cloud/amazon/ec2_elb_lb.py ESTABLISH LOCAL CONNECTION FOR USER: nsroot EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `"" && echo ansible-tmp-1474500949.27-185632929248572=""` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `"" ) && sleep 0' PUT /tmp/tmp9oxobX TO /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py EXEC /bin/sh -c 'chmod u+x /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py; rm -rf ""/home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 1354, in main() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 1338, in main elb_man.ensure_ok() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 410, in _do_op return op(*args, **kwargs) File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 484, in ensure_ok self._set_elb_listeners() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 755, in _set_elb_listeners if existing_listener[0] == int(listener['load_balancer_port']): KeyError: 0 fatal: [localhost]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ec2_elb_lb"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 1354, in \n main()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 1338, in main\n elb_man.ensure_ok()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 410, in _do_op\n return op(*args, **kwargs)\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 484, in ensure_ok\n self._set_elb_listeners()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 755, in _set_elb_listeners\n if existing_listener[0] == int(listener['load_balancer_port']):\nKeyError: 0\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @/home/nsroot/ansible-work/playbooks/create-elbs.retry ``` ",True,"ec2_elb_lb intermittently failing to create an ELB with KeyError exception - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb.py ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700) lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY When running a playbook to create multiple ELBs, some of the ELBs would intermittently fail to get created with a KeyError exception. I tracked this issue down to some data structure inconsistency in the _set_elb_listeners function. 
It would appear that the data type stored in self.elb.listeners is sometimes a dict and other times a tuple. I hacked around this issue but am not sure if my fix has broken anything else. See the diff below: ``` user@utf:/tmp$ diff ec2_elb_lb.py mine.py 755,758c755,760 < if existing_listener[0] == int(listener['load_balancer_port']): < existing_listener_found = self._api_listener_as_tuple(existing_listener) < break < --- > try: > if self._listener_as_tuple(existing_listener)[0] == int(listener['load_balancer_port']): > existing_listener_found = self._listener_as_tuple(existing_listener) > break > except KeyError: > self.module.fail_json(msg=""Ran into keyerror bug. self.elb.listeners is '%s'"" % self.elb.listeners) 776c778 < existing_listener_tuple = self._api_listener_as_tuple(existing_listener) --- > existing_listener_tuple = self._listener_as_tuple(existing_listener) ``` ##### STEPS TO REPRODUCE ``` - name: Create Control ELB ec2_elb_lb: name: ""Control-ELB"" state: present security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true idle_timeout: 60 subnets: - ""{{ subnet_id }}"" purge_listeners: true listeners: - protocol: tcp load_balancer_port: 443 instance_port: 443 health_check: ping_protocol: http ping_port: 80 ping_path: ""/health.html"" response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 - name: Create Service ELB ec2_elb_lb: name: ""Service-ELB"" state: present security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true idle_timeout: 3600 subnets: - ""{{ subnet_id }}"" purge_listeners: true listeners: - protocol: tcp load_balancer_port: 443 instance_port: 443 health_check: ping_protocol: https ping_port: 443 ping_path: ""/health.html"" response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 - name: Create Internal ELB ec2_elb_lb: name: ""Int-ELB"" state: present subnets: - ""{{ subnet_id }}"" security_group_names: - ""{{ sg_name }}"" region: ""{{ aws_region }}"" purge_instance_ids: true scheme: ""internal"" idle_timeout: 3600 purge_listeners: true listeners: - protocol: tcp load_balancer_port: 80 instance_port: 80 health_check: ping_protocol: tcp ping_port: 80 response_timeout: 5 interval: 10 unhealthy_threshold: 2 healthy_threshold: 2 ``` ##### EXPECTED RESULTS I expect all ELBs to be created properly with the given settings. ##### ACTUAL RESULTS The ELBs get created but Ansible quits with a Python exception which breaks further playbooks from running in a workflow. 
``` TASK [Create Internal ELB] ****************************************** task path: /home/nsroot/ansible-work/playbooks/create-elbs.yml:96 Using module file /home/nsroot/ansible/lib/ansible/modules/core/cloud/amazon/ec2_elb_lb.py ESTABLISH LOCAL CONNECTION FOR USER: nsroot EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `"" && echo ansible-tmp-1474500949.27-185632929248572=""` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `"" ) && sleep 0' PUT /tmp/tmp9oxobX TO /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py EXEC /bin/sh -c 'chmod u+x /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py; rm -rf ""/home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 1354, in main() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 1338, in main elb_man.ensure_ok() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 410, in _do_op return op(*args, **kwargs) File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 484, in ensure_ok self._set_elb_listeners() File ""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py"", line 755, in _set_elb_listeners if existing_listener[0] == int(listener['load_balancer_port']): KeyError: 0 fatal: [localhost]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ec2_elb_lb"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 1354, in \n main()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 1338, in main\n elb_man.ensure_ok()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 410, in _do_op\n return op(*args, **kwargs)\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 484, in ensure_ok\n self._set_elb_listeners()\n File \""/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\"", line 755, in _set_elb_listeners\n if existing_listener[0] == int(listener['load_balancer_port']):\nKeyError: 0\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @/home/nsroot/ansible-work/playbooks/create-elbs.retry ``` ",1, elb lb intermittently failing to create an elb with keyerror exception issue type bug report component name elb lb py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when running a playbook to create multiple elbs some of the elbs would intermittently fail to get created with a keyerror exception i tracked this issue down to some data structure inconsistency in the set elb listeners function it would appear that the data type stored in self elb listeners is 
sometimes a dict and other times a tuple i hacked around this issue but am not sure if my fix has broken anything else see the diff below user utf tmp diff elb lb py mine py if existing listener int listener existing listener found self api listener as tuple existing listener break try if self listener as tuple existing listener int listener existing listener found self listener as tuple existing listener break except keyerror self module fail json msg ran into keyerror bug self elb listeners is s self elb listeners existing listener tuple self api listener as tuple existing listener existing listener tuple self listener as tuple existing listener steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create control elb elb lb name control elb state present security group names sg name region aws region purge instance ids true idle timeout subnets subnet id purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol http ping port ping path health html response timeout interval unhealthy threshold healthy threshold name create service elb elb lb name service elb state present security group names sg name region aws region purge instance ids true idle timeout subnets subnet id purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol https ping port ping path health html response timeout interval unhealthy threshold healthy threshold name create internal elb elb lb name int elb state present subnets subnet id security group names sg name region aws region purge instance ids true scheme internal idle timeout purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol tcp ping port response timeout interval unhealthy threshold healthy threshold expected results i expect all elbs to be created properly with the given settings actual results the elbs get created but ansible quits with a python exception which breaks further playbooks from running in a workflow task task path home nsroot ansible work playbooks create elbs yml using module file home nsroot ansible lib ansible modules core cloud amazon elb lb py establish local connection for user nsroot exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home nsroot ansible tmp ansible tmp elb lb py exec bin sh c chmod u x home nsroot ansible tmp ansible tmp home nsroot ansible tmp ansible tmp elb lb py sleep exec bin sh c usr bin python home nsroot ansible tmp ansible tmp elb lb py rm rf home nsroot ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible hbrcmm ansible module elb lb py line in main file tmp ansible hbrcmm ansible module elb lb py line in main elb man ensure ok file tmp ansible hbrcmm ansible module elb lb py line in do op return op args kwargs file tmp ansible hbrcmm ansible module elb lb py line in ensure ok self set elb listeners file tmp ansible hbrcmm ansible module elb lb py line in set elb listeners if existing listener int listener keyerror fatal failed changed false failed true invocation module name elb lb module stderr traceback most recent call last n file tmp ansible hbrcmm ansible module elb lb py line in n main n file tmp ansible hbrcmm ansible module elb lb py line in main n elb man ensure ok n file tmp ansible hbrcmm ansible module elb lb 
py line in do op n return op args kwargs n file tmp ansible hbrcmm ansible module elb lb py line in ensure ok n self set elb listeners n file tmp ansible hbrcmm ansible module elb lb py line in set elb listeners n if existing listener int listener nkeyerror n module stdout msg module failure to retry use limit home nsroot ansible work playbooks create elbs retry ,1 1771,6575050071.0,IssuesEvent,2017-09-11 14:53:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"docker_container module doesn't accept value ""all"" for parameter ""ports"" ",affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04 x64 ##### SUMMARY Docker_container module doesn't accept value `all` for parameter `ports` ##### STEPS TO REPRODUCE ``` --- - hosts: localhost tasks: - name: run nginx docker container become: yes docker_container: name: nginx image: nginx ports: all ansible-playbook main.yaml ``` ##### EXPECTED RESULTS container is created and running, all exposed ports are mapped to random on the host. That's working in ansible 2.1.0.0 but then broken in 2.1.1.0 and 2.1.2.0 ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [run nginx docker container] ********************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 #docker ps 7262335b0297 nginx ""nginx -g 'daemon off"" 5 minutes ago Up 4 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp nginx ``` ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [run nginx docker container] ********************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: 'a' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1928, in \n main()\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1921, in main\n cm = ContainerManager(client)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1575, in __init__\n self.parameters = TaskParameters(client)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 731, in __init__\n self.ports = self._parse_exposed_ports(self.published_ports)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 990, in _parse_exposed_ports\n port = int(publish_port)\nValueError: invalid literal for int() with base 10: 'a'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/test/main.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",True,"docker_container module doesn't accept value ""all"" for parameter ""ports"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04 x64 ##### SUMMARY Docker_container module doesn't accept value `all` for parameter `ports` ##### STEPS TO REPRODUCE ``` --- - hosts: localhost tasks: - name: run nginx docker container become: yes docker_container: name: nginx image: nginx ports: all ansible-playbook main.yaml ``` ##### EXPECTED RESULTS container is created and running, all exposed ports are mapped to random on the host. That's working in ansible 2.1.0.0 but then broken in 2.1.1.0 and 2.1.2.0 ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [run nginx docker container] ********************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 #docker ps 7262335b0297 nginx ""nginx -g 'daemon off"" 5 minutes ago Up 4 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp nginx ``` ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [run nginx docker container] ********************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: 'a' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1928, in \n main()\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1921, in main\n cm = ContainerManager(client)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 1575, in __init__\n self.parameters = TaskParameters(client)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 731, in __init__\n self.ports = self._parse_exposed_ports(self.published_ports)\n File \""/tmp/ansible_r3iyN9/ansible_module_docker_container.py\"", line 990, in _parse_exposed_ports\n port = int(publish_port)\nValueError: invalid literal for int() with base 10: 'a'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/test/main.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",1,docker container module doesn t accept value all for parameter ports issue type bug report component name docker container ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary docker container module doesn t accept value all for parameter ports steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost tasks name run nginx docker container become yes docker container name nginx image nginx ports all ansible playbook main yaml expected results container is created and running all exposed ports are mapped to random on the host that s working in ansible but then broken in and play task ok task changed play recap localhost ok changed unreachable failed docker ps nginx nginx g daemon off minutes ago up minutes tcp tcp nginx actual results play task ok task an exception occurred during task execution to see the full traceback use vvv the error was valueerror invalid literal for int with base a fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module docker container py line in n main n file tmp ansible ansible module docker container py line in main n cm containermanager client n file tmp ansible ansible module docker container py line in init n self parameters taskparameters client n file tmp ansible ansible module docker container py line in init n self ports self parse exposed ports self published ports n file tmp ansible ansible module docker container py line in parse exposed ports n port int publish port nvalueerror invalid literal for int with base a n module stdout msg module failure no more hosts left to retry use limit home vagrant test main retry play recap localhost ok changed unreachable failed ,1 1221,5216898376.0,IssuesEvent,2017-01-26 12:00:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_group should allow for security group revocations,affects_2.1 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION 
ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_group allows us to create security groups, to remove security groups and to add new rules to security groups. What it doesn't allow us to do is to remove rules, so we can't easily revoke rules that have been set. This feature is already available in AWS and boto3 ([relevant documentation](https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.SecurityGroup.revoke_ingress)). ##### STEPS TO REPRODUCE Let's say you have a playbook that launches a new EC2 instance. That role also adds a new rule to an existing ELB, so that the new EC2 instance can access it. If you now have a `cleanup` playbook that destroys that EC2 instance, you ideally want to remove the rule you just added to the ELB. A simple revoke would be ideal, but that feature is not available. We could have two new options to the module, `revoke_rules` and `revoke_rules_egress`, to take care of this. ",True,"ec2_group should allow for security group revocations - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_group allows us to create security groups, to remove security groups and to add new rules to security groups. What it doesn't allow us to do is to remove rules, so we can't easily revoke rules that have been set. This feature is already available in AWS and boto3 ([relevant documentation](https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.SecurityGroup.revoke_ingress)). ##### STEPS TO REPRODUCE Let's say you have a playbook that launches a new EC2 instance. That role also adds a new rule to an existing ELB, so that the new EC2 instance can access it. If you now have a `cleanup` playbook that destroys that EC2 instance, you ideally want to remove the rule you just added to the ELB. A simple revoke would be ideal, but that feature is not available. We could have two new options to the module, `revoke_rules` and `revoke_rules_egress`, to take care of this. 
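For reference, the sketch below shows the boto3 revoke_ingress call that the report links to, which a hypothetical `revoke_rules` option could drive. The region, security group ID, port and CIDR below are placeholders, not values taken from the report.

```
# Sketch only: revoke one ingress rule with the boto3 API referenced above.
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')   # placeholder region
group = ec2.SecurityGroup('sg-0123456789abcdef0')      # placeholder group ID

group.revoke_ingress(
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'IpRanges': [{'CidrIp': '10.0.0.10/32'}],       # placeholder source
    }]
)
```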
",1, group should allow for security group revocations issue type feature idea component name group ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment n a summary group allows us to create security groups to remove security groups and to add new rules to security groups what it doesn t allow us to do is to remove rules so we can t easily revoke rules that have been set this feature is already available in aws and steps to reproduce let s say you have a playbook that launches a new instance that role also adds a new rule to an existing elb so that the new instance can access it if you now have a cleanup playbook that destroys that instance you ideally want to remove the rule you just added to the elb a simple revoke would be ideal but that feature is not available we could have two new options to the module revoke rules and revoke rules egress to take care of this ,1 1678,6574117695.0,IssuesEvent,2017-09-11 11:33:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Support docker --sysctls options in docker_container,affects_2.3 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/docker_container ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/kassian/bundle/ansible/core', '/home/kassian/bundle/ansible/extras'] ``` ##### SUMMARY docker has new '--sysctl=' options with 1.12 release, docker-py also came with sysctls options in release 1.10, we should add this feature in docker module ",True,"Support docker --sysctls options in docker_container - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/docker_container ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/kassian/bundle/ansible/core', '/home/kassian/bundle/ansible/extras'] ``` ##### SUMMARY docker has new '--sysctl=' options with 1.12 release, docker-py also came with sysctls options in release 1.10, we should add this feature in docker module ",1,support docker sysctls options in docker container issue type bug report component name cloud docker container ansible version ansible config file etc ansible ansible cfg configured module search path summary docker has new sysctl options with release docker py also came with sysctls options in release we should add this feature in docker module ,1 1840,6577374169.0,IssuesEvent,2017-09-12 00:27:56,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_config issue with creating L2 vlans using transport cli,affects_2.2 bug_report networking P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module - nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 1861151fa4) last updated 2016/05/20 16:06:57 (GMT +000) lib/ansible/modules/core: (detached HEAD d3097bf580) last updated 2016/05/20 16:12:00 (GMT +000) lib/ansible/modules/extras: (detached HEAD ce5a9b6c5f) last updated 2016/05/20 16:12:01 (GMT +000) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Cisco NXOS - n3000-uk9.6.0.2.U5.2.bin ##### SUMMARY When running nxos_config with transport cli L2 vlans are created, however, name is ignored. 
Currently, I believe this is due to the need to create the parent, but when no name is supplied the vlan is not in the running-configuration. ``` show run | i vlan ``` ##### STEPS TO REPRODUCE ``` - name: Setup Bridging (Vlans) nxos_config: lines: - 'name {{ item.name }}' parents: - 'vlan {{ item.id }}' host: ""{{ inventory_hostname }}"" username: ""{{ cisco.nexus.username }}"" password: ""{{ cisco.nexus.password }}"" transport: cli use_ssl: yes validate_certs: false when: vlans is defined with_items: - ""{{ vlans }}"" ``` group_vars or host_vars ``` --- vlans: - id: 10 name: Ansible ``` ##### EXPECTED RESULTS ``` TASK [Setup Bridging (Vlans)] ************************************************** changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'}) ``` On NXOS: show run vlan ``` vlan 10 name Ansible ``` ##### ACTUAL RESULTS ``` TASK [Setup Bridging (Vlans)] ************************************************** changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'}) ``` On NXOS: show run vlan ``` vlan 10 ``` ",True,"nxos_config issue with creating L2 vlans using transport cli - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module - nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 1861151fa4) last updated 2016/05/20 16:06:57 (GMT +000) lib/ansible/modules/core: (detached HEAD d3097bf580) last updated 2016/05/20 16:12:00 (GMT +000) lib/ansible/modules/extras: (detached HEAD ce5a9b6c5f) last updated 2016/05/20 16:12:01 (GMT +000) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Cisco NXOS - n3000-uk9.6.0.2.U5.2.bin ##### SUMMARY When running nxos_config with transport cli L2 vlans are created, however, name is ignored. Currently, I believe this is due to the need to create the parent, but when no name is supplied the vlan is not in the running-configuration. 
``` show run | i vlan ``` ##### STEPS TO REPRODUCE ``` - name: Setup Bridging (Vlans) nxos_config: lines: - 'name {{ item.name }}' parents: - 'vlan {{ item.id }}' host: ""{{ inventory_hostname }}"" username: ""{{ cisco.nexus.username }}"" password: ""{{ cisco.nexus.password }}"" transport: cli use_ssl: yes validate_certs: false when: vlans is defined with_items: - ""{{ vlans }}"" ``` group_vars or host_vars ``` --- vlans: - id: 10 name: Ansible ``` ##### EXPECTED RESULTS ``` TASK [Setup Bridging (Vlans)] ************************************************** changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'}) ``` On NXOS: show run vlan ``` vlan 10 name Ansible ``` ##### ACTUAL RESULTS ``` TASK [Setup Bridging (Vlans)] ************************************************** changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'}) ``` On NXOS: show run vlan ``` vlan 10 ``` ",1,nxos config issue with creating vlans using transport cli issue type bug report component name module nxos config ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific cisco nxos bin summary when running nxos config with transport cli vlans are created however name is ignored currently i believe this is due to the need to create the parent but when no name is supplied the vlan is not in the running configuration show run i vlan steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name setup bridging vlans nxos config lines name item name parents vlan item id host inventory hostname username cisco nexus username password cisco nexus password transport cli use ssl yes validate certs false when vlans is defined with items vlans group vars or host vars vlans id name ansible expected results task changed item u id u name u ansible on nxos show run vlan vlan name ansible actual results task changed item u id u name u ansible on nxos show run vlan vlan ,1 1768,6575035720.0,IssuesEvent,2017-09-11 14:50:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Docs on template module jinja2 vars `template_path` vs `template_fullpath` are confusing,affects_2.2 docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME Template module ##### ANSIBLE VERSION Ansible 2.2 ##### SUMMARY The explanation of `template_path` vs `template_fullpath` are confusing (https://github.com/ansible/ansible-modules-core/blob/devel/files/template.py#L32-L34): > ...`template_path` the absolute path of the template, `template_fullpath` is the absolute path of the template... It's not clear if both variables are the same, or if one is a relative path and one a full path. Plus the wording reads awkwardly. 
",True,"Docs on template module jinja2 vars `template_path` vs `template_fullpath` are confusing - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME Template module ##### ANSIBLE VERSION Ansible 2.2 ##### SUMMARY The explanation of `template_path` vs `template_fullpath` are confusing (https://github.com/ansible/ansible-modules-core/blob/devel/files/template.py#L32-L34): > ...`template_path` the absolute path of the template, `template_fullpath` is the absolute path of the template... It's not clear if both variables are the same, or if one is a relative path and one a full path. Plus the wording reads awkwardly. ",1,docs on template module vars template path vs template fullpath are confusing issue type documentation report component name template module ansible version ansible summary the explanation of template path vs template fullpath are confusing template path the absolute path of the template template fullpath is the absolute path of the template it s not clear if both variables are the same or if one is a relative path and one a full path plus the wording reads awkwardly ,1 742,4349450299.0,IssuesEvent,2016-07-30 15:37:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"apt_rpm.py does not support installation using ""/usr/bin/rpm"" as mentioned ",feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea apt_rpm does not support managing packages with rpm on linux distros apt_rpm - apt_rpm package manager ##### COMPONENT NAME apt_rpm ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION NO but may be required if to support additinal capabilities of RPM command like non root installation ##### OS / ENVIRONMENT Running Ansible from and to : RHEL 7.2 ##### SUMMARY As written for package apt_rpm apt_rpm documentation says it manages installation,update and removal using low-level (rpm) and high-level (apt-get) package manager binaries however if i have to use this package for my linux distro like fedora/RHEL then i dont have binary apt-get installed on my machine , but i do have ""/usr/bin/rpm"" so apt_rpm module is supposed to work ""- apt_rpm: pkg=foo state=present"" but it throws error saying {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} this is becasue there is bug in code line numbers 127 and 153 https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_rpm.py apt_rpm.py is supporting only APT_PATH and not RPM_PATH (line no 73) line number 153 should be fixed and ""or"" condition should be changed to ""and"" also apt_rpm.py needs to support RPM_PATH since this package uses RPM method to manage packages so facility to support RPM installation upgrade removal via non root user should also be added in apt_rpm.py code because rpm supports non root root installation using options like --dbpath --relocate http://docs.ansible.com/ansible/apt_rpm_module.html ##### STEPS TO REPRODUCE use below playbook on any fedora like OS and try to install wget package but it gives error : {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` --- - hosts: RMP tasks: - apt_rpm: pkg=wget state=present - shell: echo ""hello world"" ``` ##### EXPECTED RESULTS i was expecting rpm 
to get installted on my RHEL machine using ""- apt_rpm: pkg=foo state=present"" ##### ACTUAL RESULTS Yes ``` {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` ",True,"apt_rpm.py does not support installation using ""/usr/bin/rpm"" as mentioned - ##### ISSUE TYPE - Feature Idea apt_rpm does not support managing packages with rpm on linux distros apt_rpm - apt_rpm package manager ##### COMPONENT NAME apt_rpm ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION NO but may be required if to support additinal capabilities of RPM command like non root installation ##### OS / ENVIRONMENT Running Ansible from and to : RHEL 7.2 ##### SUMMARY As written for package apt_rpm apt_rpm documentation says it manages installation,update and removal using low-level (rpm) and high-level (apt-get) package manager binaries however if i have to use this package for my linux distro like fedora/RHEL then i dont have binary apt-get installed on my machine , but i do have ""/usr/bin/rpm"" so apt_rpm module is supposed to work ""- apt_rpm: pkg=foo state=present"" but it throws error saying {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} this is becasue there is bug in code line numbers 127 and 153 https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_rpm.py apt_rpm.py is supporting only APT_PATH and not RPM_PATH (line no 73) line number 153 should be fixed and ""or"" condition should be changed to ""and"" also apt_rpm.py needs to support RPM_PATH since this package uses RPM method to manage packages so facility to support RPM installation upgrade removal via non root user should also be added in apt_rpm.py code because rpm supports non root root installation using options like --dbpath --relocate http://docs.ansible.com/ansible/apt_rpm_module.html ##### STEPS TO REPRODUCE use below playbook on any fedora like OS and try to install wget package but it gives error : {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` --- - hosts: RMP tasks: - apt_rpm: pkg=wget state=present - shell: echo ""hello world"" ``` ##### EXPECTED RESULTS i was expecting rpm to get installted on my RHEL machine using ""- apt_rpm: pkg=foo state=present"" ##### ACTUAL RESULTS Yes ``` {""changed"": false, ""cmd"": ""/usr/bin/apt-get -y install /home/xyz.rpm '>' /dev/null"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` ",1,apt rpm py does not support installation using usr bin rpm as mentioned issue type feature idea apt rpm does not support managing packages with rpm on linux distros apt rpm apt rpm package manager component name apt rpm ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no but may be required if to support additinal capabilities of rpm command like non root installation os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running 
ansible from and to rhel summary as written for package apt rpm apt rpm documentation says it manages installation update and removal using low level rpm and high level apt get package manager binaries however if i have to use this package for my linux distro like fedora rhel then i dont have binary apt get installed on my machine but i do have usr bin rpm so apt rpm module is supposed to work apt rpm pkg foo state present but it throws error saying changed false cmd usr bin apt get y install home xyz rpm dev null failed true msg no such file or directory rc this is becasue there is bug in code line numbers and apt rpm py is supporting only apt path and not rpm path line no line number should be fixed and or condition should be changed to and also apt rpm py needs to support rpm path since this package uses rpm method to manage packages so facility to support rpm installation upgrade removal via non root user should also be added in apt rpm py code because rpm supports non root root installation using options like dbpath relocate steps to reproduce use below playbook on any fedora like os and try to install wget package but it gives error changed false cmd usr bin apt get y install home xyz rpm dev null failed true msg no such file or directory rc hosts rmp tasks apt rpm pkg wget state present shell echo hello world expected results i was expecting rpm to get installted on my rhel machine using apt rpm pkg foo state present actual results yes changed false cmd usr bin apt get y install home xyz rpm dev null failed true msg no such file or directory rc ,1 1117,4989181366.0,IssuesEvent,2016-12-08 10:55:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum module fails when installing via URL if a yum http_proxy is required,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.1.2.0 (v2.1.2.0-1 29f2f26278) last updated 2016/11/28 11:09:19 (GMT +100) lib/ansible/modules/core: (detached HEAD 17ee1cfaf9) last updated 2016/11/28 11:58:46 (GMT +100) lib/ansible/modules/extras: (detached HEAD d312f34d9b) last updated 2016/11/28 11:58:47 (GMT +100) config file = /home/ansible/opencast-ansible/ansible.cfg configured module search path = ['/home/ansible/src/ansible/lib/ansible/modules/core', '/home/ansible/src/ansible/lib/anisble/modules/extra'] ``` ##### OS / ENVIRONMENT Red Hat 6 ##### SUMMARY The Yum module fails when trying make sure an RPM is installed via a URL if it must do so via a http proxy. I found that line 246 in the yum module it tries to fetch the url directly possibly circumventing the proxy settings: https://github.com/ansible/ansible-modules-core/blob/c67315fc4e0c3bc5cb519ef2651cccf4bc659780/packaging/os/yum.py#L246 Note: the proxy settings are also set in /etc/profile.d/proxy.sh but I guess these are not picked up when when the remote module code is executed. ##### STEPS TO REPRODUCE myserver must use a http_proxy which has been correctly set in /etc/yum.conf I'm using a playbook but the single command also fails: ``` ansible myserver -i inventory/ -s -m yum -a ""state=present name=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm"" ``` Note: it doesn't matter if the rpm is already cached in /var/tmp/yum-### or already installed or not. Note: running the yum command below on myserver works as expected. 
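For illustration, the following Python sketch shows the kind of proxy-aware download the reporter expects: it reads the proxy value that yum itself honours from /etc/yum.conf and routes the RPM fetch through it. The helper is an assumption for the example and not the yum module's actual code; the URL and config path are the ones mentioned in the report.

```
# Hypothetical sketch of a proxy-aware fetch (Python 3 standard library).
import configparser
import urllib.request

RPM_URL = 'http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'

def fetch_via_yum_proxy(url, yum_conf='/etc/yum.conf'):
    cfg = configparser.RawConfigParser()
    cfg.read(yum_conf)
    proxy = cfg.get('main', 'proxy', fallback=None)   # same [main] proxy option yum reads
    handlers = [urllib.request.ProxyHandler({'http': proxy, 'https': proxy})] if proxy else []
    opener = urllib.request.build_opener(*handlers)
    with opener.open(url) as resp:
        return resp.read()

print(len(fetch_via_yum_proxy(RPM_URL)), 'bytes downloaded')
```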
``` sudo yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm ```` ##### EXPECTED RESULTS SUCCESS or Changed ##### ACTUAL RESULTS Fails to confirm that it is installed ``` fatal: [myserver]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failure downloading http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm, 'NoneType' object has no attribute 'read'""} ``` ",True,"Yum module fails when installing via URL if a yum http_proxy is required - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.1.2.0 (v2.1.2.0-1 29f2f26278) last updated 2016/11/28 11:09:19 (GMT +100) lib/ansible/modules/core: (detached HEAD 17ee1cfaf9) last updated 2016/11/28 11:58:46 (GMT +100) lib/ansible/modules/extras: (detached HEAD d312f34d9b) last updated 2016/11/28 11:58:47 (GMT +100) config file = /home/ansible/opencast-ansible/ansible.cfg configured module search path = ['/home/ansible/src/ansible/lib/ansible/modules/core', '/home/ansible/src/ansible/lib/anisble/modules/extra'] ``` ##### OS / ENVIRONMENT Red Hat 6 ##### SUMMARY The Yum module fails when trying make sure an RPM is installed via a URL if it must do so via a http proxy. I found that line 246 in the yum module it tries to fetch the url directly possibly circumventing the proxy settings: https://github.com/ansible/ansible-modules-core/blob/c67315fc4e0c3bc5cb519ef2651cccf4bc659780/packaging/os/yum.py#L246 Note: the proxy settings are also set in /etc/profile.d/proxy.sh but I guess these are not picked up when when the remote module code is executed. ##### STEPS TO REPRODUCE myserver must use a http_proxy which has been correctly set in /etc/yum.conf I'm using a playbook but the single command also fails: ``` ansible myserver -i inventory/ -s -m yum -a ""state=present name=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm"" ``` Note: it doesn't matter if the rpm is already cached in /var/tmp/yum-### or already installed or not. Note: running the yum command below on myserver works as expected. ``` sudo yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm ```` ##### EXPECTED RESULTS SUCCESS or Changed ##### ACTUAL RESULTS Fails to confirm that it is installed ``` fatal: [myserver]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Failure downloading http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm, 'NoneType' object has no attribute 'read'""} ``` ",1,yum module fails when installing via url if a yum http proxy is required issue type bug report component name yum ansible version ansible last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home ansible opencast ansible ansible cfg configured module search path os environment red hat summary the yum module fails when trying make sure an rpm is installed via a url if it must do so via a http proxy i found that line in the yum module it tries to fetch the url directly possibly circumventing the proxy settings note the proxy settings are also set in etc profile d proxy sh but i guess these are not picked up when when the remote module code is executed steps to reproduce myserver must use a http proxy which has been correctly set in etc yum conf i m using a playbook but the single command also fails ansible myserver i inventory s m yum a state present name note it doesn t matter if the rpm is already cached in var tmp yum or already installed or not note running the yum command below on myserver works as expected sudo yum install expected results success or changed actual results fails to confirm that it is installed fatal failed changed false failed true msg failure downloading nonetype object has no attribute read ,1 1907,6577567202.0,IssuesEvent,2017-09-12 01:48:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add support for StepScaling policies to ec2_scaling_policy module,affects_1.9 aws bug_report cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: ec2_scaling_policy ##### Ansible Version: ``` ansible 1.9.3 configured module search path = library ``` ##### Environment: N/A ##### Summary: Seems like ansible does not allow me to create Step Scaling policies. Using the `ec2_scaling_policy` module always results in a simple policy and I'm not sure if I'm misunderstanding the parameters in order to create a StepScaling policy as described here: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html (Step Scaling Policies section) You can also find documentation on the AWS CLI docs here: http://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-scaling-policy.html (describes in the `--policy-type` parameter) ##### Steps To Reproduce: Simply try to create a scaling policy with the needed alarm metric and notice how AWS shows a simple scaling policy on the ASG. ##### Expected Results: N/A ##### Actual Results: N/A ",True,"Add support for StepScaling policies to ec2_scaling_policy module - ##### Issue Type: - Feature Idea ##### Plugin Name: ec2_scaling_policy ##### Ansible Version: ``` ansible 1.9.3 configured module search path = library ``` ##### Environment: N/A ##### Summary: Seems like ansible does not allow me to create Step Scaling policies. 
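As a point of reference for the request, which the report elaborates on next, here is a hedged boto3 sketch of the StepScaling policy type being asked for. The group name, policy name and step boundaries are placeholders for illustration only.

```
# Sketch only: create a step scaling policy directly with boto3.
import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')   # placeholder region

autoscaling.put_scaling_policy(
    AutoScalingGroupName='my-asg',            # placeholder
    PolicyName='scale-out-on-high-cpu',       # placeholder
    PolicyType='StepScaling',                 # as opposed to the default SimpleScaling
    AdjustmentType='ChangeInCapacity',
    StepAdjustments=[
        {'MetricIntervalLowerBound': 0.0, 'MetricIntervalUpperBound': 10.0, 'ScalingAdjustment': 1},
        {'MetricIntervalLowerBound': 10.0, 'ScalingAdjustment': 2},
    ],
)
```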
Using the `ec2_scaling_policy` module always results in a simple policy and I'm not sure if I'm misunderstanding the parameters in order to create a StepScaling policy as described here: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html (Step Scaling Policies section) You can also find documentation on the AWS CLI docs here: http://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-scaling-policy.html (describes in the `--policy-type` parameter) ##### Steps To Reproduce: Simply try to create a scaling policy with the needed alarm metric and notice how AWS shows a simple scaling policy on the ASG. ##### Expected Results: N/A ##### Actual Results: N/A ",1,add support for stepscaling policies to scaling policy module issue type feature idea plugin name scaling policy ansible version ansible configured module search path library environment n a summary seems like ansible does not allow me to create step scaling policies using the scaling policy module always results in a simple policy and i m not sure if i m misunderstanding the parameters in order to create a stepscaling policy as described here step scaling policies section you can also find documentation on the aws cli docs here describes in the policy type parameter steps to reproduce simply try to create a scaling policy with the needed alarm metric and notice how aws shows a simple scaling policy on the asg expected results n a actual results n a ,1 1871,6577493724.0,IssuesEvent,2017-09-12 01:18:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_group check mode is inaccurate,affects_2.0 aws bug_report cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When running ansible with `--check`, all security groups are listed as having changes. ##### STEPS TO REPRODUCE You should be able to reproduce this with [the example task](https://docs.ansible.com/ansible/ec2_group_module.html#examples), or anything simpler. ##### EXPECTED RESULTS I expected the comparison to be made between the local declarations and the currently existing definitions in AWS, and only those that would normally be changed would show changes. Additionally, it'd be really nice if `--diff` produced any sort of output indicating what the diff between them is. ##### ACTUAL RESULTS Ansible reports changes to every security group, with no additional information. Reading through [the module](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2_group.py), there are a bunch of places where `check_mode` just causes a conditional to be skipped, and so probably some of that logic needs to be moved out of the conditional. ",True,"ec2_group check mode is inaccurate - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When running ansible with `--check`, all security groups are listed as having changes. ##### STEPS TO REPRODUCE You should be able to reproduce this with [the example task](https://docs.ansible.com/ansible/ec2_group_module.html#examples), or anything simpler. 
##### EXPECTED RESULTS I expected the comparison to be made between the local declarations and the currently existing definitions in AWS, and only those that would normally be changed would show changes. Additionally, it'd be really nice if `--diff` produced any sort of output indicating what the diff between them is. ##### ACTUAL RESULTS Ansible reports changes to every security group, with no additional information. Reading through [the module](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2_group.py), there are a bunch of places where `check_mode` just causes a conditional to be skipped, and so probably some of that logic needs to be moved out of the conditional. ",1, group check mode is inaccurate issue type feature idea component name group ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment n a summary when running ansible with check all security groups are listed as having changes steps to reproduce you should be able to reproduce this with or anything simpler expected results i expected the comparison to be made between the local declarations and the currently existing definitions in aws and only those that would normally be changed would show changes additionally it d be really nice if diff produced any sort of output indicating what the diff between them is actual results ansible reports changes to every security group with no additional information reading through there are a bunch of places where check mode just causes a conditional to be skipped and so probably some of that logic needs to be moved out of the conditional ,1 1818,6577323575.0,IssuesEvent,2017-09-12 00:06:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vol does not allow adding all available volume types,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vol ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux: `CentOS 7.1`, `Ubuntu 14.04`, `Ubuntu 16.04` arch: `x86_64` ##### SUMMARY if you want to attach a volume of type not listed in ec2_vol.py explicitly your task fails. ##### STEPS TO REPRODUCE The following playbook would fail ``` - name: attach new volume hosts: localhost tasks: module: ec2_vol instance: '' region: '' device_name: '/dev/xvdf' # or whatever volume_size: 40 volume_type: 'st1' iops: 100 delete_on_termination: true ``` ##### EXPECTED RESULTS I would expect the task to succeed :) ##### ACTUAL RESULTS ``` ansible-playbook -i hosts -e playbooks/ec2node-provision.yml TASK [myuser.ec2node : Attach local volumes (storage_type=local)] ************** task path: /home/user/workspace/ansible/ec2/playbooks/roles/myuser.ec2node/tasks/ec2_setup.yml:43 fatal: [localhost]: FAILED! 
=> {""failed"": true, ""msg"": ""value of volume_type must be one of: standard,gp2,io1, got: st1""} to retry, use: --limit @playbooks/ec2node-provision.retry PLAY RECAP ********************************************************************* ``` The code of `ec2_vol.py` clearly limits types in `main()` ",True,"ec2_vol does not allow adding all available volume types - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vol ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux: `CentOS 7.1`, `Ubuntu 14.04`, `Ubuntu 16.04` arch: `x86_64` ##### SUMMARY if you want to attach a volume of type not listed in ec2_vol.py explicitly your task fails. ##### STEPS TO REPRODUCE The following playbook would fail ``` - name: attach new volume hosts: localhost tasks: module: ec2_vol instance: '' region: '' device_name: '/dev/xvdf' # or whatever volume_size: 40 volume_type: 'st1' iops: 100 delete_on_termination: true ``` ##### EXPECTED RESULTS I would expect the task to succeed :) ##### ACTUAL RESULTS ``` ansible-playbook -i hosts -e playbooks/ec2node-provision.yml TASK [myuser.ec2node : Attach local volumes (storage_type=local)] ************** task path: /home/user/workspace/ansible/ec2/playbooks/roles/myuser.ec2node/tasks/ec2_setup.yml:43 fatal: [localhost]: FAILED! => {""failed"": true, ""msg"": ""value of volume_type must be one of: standard,gp2,io1, got: st1""} to retry, use: --limit @playbooks/ec2node-provision.retry PLAY RECAP ********************************************************************* ``` The code of `ec2_vol.py` clearly limits types in `main()` ",1, vol does not allow adding all available volume types issue type bug report component name vol ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment linux centos ubuntu ubuntu arch summary if you want to attach a volume of type not listed in vol py explicitly your task fails steps to reproduce the following playbook would fail name attach new volume hosts localhost tasks module vol instance region device name dev xvdf or whatever volume size volume type iops delete on termination true expected results i would expect the task to succeed actual results ansible playbook i hosts e playbooks provision yml task task path home user workspace ansible playbooks roles myuser tasks setup yml fatal failed failed true msg value of volume type must be one of standard got to retry use limit playbooks provision retry play recap the code of vol py clearly limits types in main ,1 1797,6575903229.0,IssuesEvent,2017-09-11 17:46:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add limit of number of backup files to file modules with backup option,affects_2.3 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME File modules with `backup` option: `copy`, `template`, `lineinfile`, `ini_file`, `replace`. ##### SUMMARY If you are using `backup` option for a long time, large number of backup files is piled up in config directory: ``` service.conf service.conf.2016-03-09@12:20:22 service.conf.2016-03-15@18:17:20~ service.conf.2016-03-21@17:59:52~ service.conf.2016-03-24@19:19:26~ ... tons and tons and tons of backup files here ... ``` In my use case backup files are used to be able to quick-revert manually, if something got wrong. 
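As an illustration of the cleanup idea developed in the rest of this report, the Python sketch below shows the kind of age-based pruning that a hypothetical backup_max_age option could perform on timestamped backup files. The directory, file name and helper are assumptions for the example.

```
# Hypothetical sketch: delete timestamped backups older than max_age_days.
import os
import time

def prune_old_backups(directory, basename, max_age_days):
    """Remove '<basename>.<timestamp>' style backup files older than the cutoff."""
    cutoff = time.time() - max_age_days * 86400
    for entry in os.scandir(directory):
        if (entry.is_file()
                and entry.name.startswith(basename + '.')
                and entry.stat().st_mtime < cutoff):
            os.remove(entry.path)

# Example: keep only the last two weeks of service.conf backups.
prune_old_backups('/etc/service', 'service.conf', max_age_days=14)
```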
So old files are not interesting, they just become obsolete garbage. It would be very convenient to have options `backup_max_age` and `backup_max_files` , that will automatically clean up old backup files based on their age(in days) or total number. ##### STEPS TO REPRODUCE Something like that: ``` yaml - name: install service config template: src=service.cfg dest=/etc/service/service.cfg mode=0644 backup=yes backup_max_age=14 ``` If service.cfg was changed - creates new backup file and clears up backup files older than 2 weeks. ",True,"Add limit of number of backup files to file modules with backup option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME File modules with `backup` option: `copy`, `template`, `lineinfile`, `ini_file`, `replace`. ##### SUMMARY If you are using `backup` option for a long time, large number of backup files is piled up in config directory: ``` service.conf service.conf.2016-03-09@12:20:22 service.conf.2016-03-15@18:17:20~ service.conf.2016-03-21@17:59:52~ service.conf.2016-03-24@19:19:26~ ... tons and tons and tons of backup files here ... ``` In my use case backup files are used to be able to quick-revert manually, if something got wrong. So old files are not interesting, they just become obsolete garbage. It would be very convenient to have options `backup_max_age` and `backup_max_files` , that will automatically clean up old backup files based on their age(in days) or total number. ##### STEPS TO REPRODUCE Something like that: ``` yaml - name: install service config template: src=service.cfg dest=/etc/service/service.cfg mode=0644 backup=yes backup_max_age=14 ``` If service.cfg was changed - creates new backup file and clears up backup files older than 2 weeks. ",1,add limit of number of backup files to file modules with backup option issue type feature idea component name file modules with backup option copy template lineinfile ini file replace summary if you are using backup option for a long time large number of backup files is piled up in config directory service conf service conf service conf service conf service conf tons and tons and tons of backup files here in my use case backup files are used to be able to quick revert manually if something got wrong so old files are not interesting they just become obsolete garbage it would be very convenient to have options backup max age and backup max files that will automatically clean up old backup files based on their age in days or total number steps to reproduce something like that yaml name install service config template src service cfg dest etc service service cfg mode backup yes backup max age if service cfg was changed creates new backup file and clears up backup files older than weeks ,1 1748,6574942739.0,IssuesEvent,2017-09-11 14:34:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image_facts does not search for tag,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_image_facts ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04.5/Debian Jessie Docker Version: ``` Client: Version: 1.12.2 API version: 1.24 Go version: go1.6.3 Git commit: bb80604 Built: Tue Oct 11 17:43:41 2016 OS/Arch: linux/amd64 Server: Version: 1.12.2 API version: 1.24 Go version: go1.6.3 Git commit: bb80604 Built: Tue Oct 11 17:43:41 2016 OS/Arch: linux/amd64 ``` 
##### SUMMARY docker_image_facts retuirns all named images regardless of tag, even when specifying a tag in the name attribute of the module. ##### STEPS TO REPRODUCE Specify tag in docker-image_facts name attribute using `:` Latest is returned in the image array, along with any other debian image in the local registry. ``` - hosts: localhost connection: local tasks: - docker_image_facts: name=""debian:latest"" - docker_image_facts: name=""debian:not-latest"" ``` ##### EXPECTED RESULTS The latest image not being returned in the second task. ##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [docker_image_facts] ****************************************************** PLAYBOOK: docker-test.yml ****************************************************** 1 plays in docker-test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466 `"" && echo ansible-tmp-1476812867.36-239285172461466=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpLHbQfE TO /root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/setup <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/setup; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [docker_image_facts] ****************************************************** task path: /root/docker-test.yml:4 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847 `"" && echo ansible-tmp-1476812867.85-152102133665847=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpvxleDe TO /root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/docker_image_facts <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/docker_image_facts; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""images"": [{""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/bash""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""27a6f130fb20b127801c1179ff49cc6adcb252c6d91f5955421e4c66a72ced31"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""/bin/bash\""]""], ""Domainname"": """", 
""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-09-23T18:08:51.133779867Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": {""DeviceId"": ""2"", ""DeviceName"": ""docker-202:2-403255-60c0ce068752b2eacce3a5c0985bd008bb4641443697174f5eab997d09c81ef6"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""Id"": ""sha256:ddf73f48a05d97e4f473d0b4ccb53383cbb0647d10e34b62d68bfc859cc6bcf9"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [""debian@sha256:677f184a5969847c0ad91d30cf1f0b925cd321e6c66e3ed5fbf9858f58425d1a""], ""RepoTags"": [""debian:latest""], ""RootFS"": {""Layers"": [""sha256:142a601d97936307e75220c35dde0348971a9584c21e7cb42e1f7004005432ab""], ""Type"": ""layers""}, ""Size"": 122988258, ""VirtualSize"": 122988258}], ""invocation"": {""module_args"": {""api_version"": null, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""docker_host"": null, ""filter_logger"": false, ""key_path"": null, ""name"": [""debian:latest""], ""ssl_version"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null}, ""module_name"": ""docker_image_facts""}} TASK [docker_image_facts] ****************************************************** task path: /root/docker-test.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637 `"" && echo ansible-tmp-1476812868.02-253123017828637=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp0oNjN1 TO /root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/docker_image_facts <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/docker_image_facts; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""images"": [{""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/bash""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""27a6f130fb20b127801c1179ff49cc6adcb252c6d91f5955421e4c66a72ced31"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""/bin/bash\""]""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": 
"""", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-09-23T18:08:51.133779867Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": {""DeviceId"": ""2"", ""DeviceName"": ""docker-202:2-403255-60c0ce068752b2eacce3a5c0985bd008bb4641443697174f5eab997d09c81ef6"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""Id"": ""sha256:ddf73f48a05d97e4f473d0b4ccb53383cbb0647d10e34b62d68bfc859cc6bcf9"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [""debian@sha256:677f184a5969847c0ad91d30cf1f0b925cd321e6c66e3ed5fbf9858f58425d1a""], ""RepoTags"": [""debian:latest""], ""RootFS"": {""Layers"": [""sha256:142a601d97936307e75220c35dde0348971a9584c21e7cb42e1f7004005432ab""], ""Type"": ""layers""}, ""Size"": 122988258, ""VirtualSize"": 122988258}], ""invocation"": {""module_args"": {""api_version"": null, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""docker_host"": null, ""filter_logger"": false, ""key_path"": null, ""name"": [""debian:not-latest""], ""ssl_version"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null}, ""module_name"": ""docker_image_facts""}} PLAY RECAP ********************************************************************* localhost : ok=3 changed=0 unreachable=0 failed=0 ``` ",True,"docker_image_facts does not search for tag - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_image_facts ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04.5/Debian Jessie Docker Version: ``` Client: Version: 1.12.2 API version: 1.24 Go version: go1.6.3 Git commit: bb80604 Built: Tue Oct 11 17:43:41 2016 OS/Arch: linux/amd64 Server: Version: 1.12.2 API version: 1.24 Go version: go1.6.3 Git commit: bb80604 Built: Tue Oct 11 17:43:41 2016 OS/Arch: linux/amd64 ``` ##### SUMMARY docker_image_facts retuirns all named images regardless of tag, even when specifying a tag in the name attribute of the module. ##### STEPS TO REPRODUCE Specify tag in docker-image_facts name attribute using `:` Latest is returned in the image array, along with any other debian image in the local registry. ``` - hosts: localhost connection: local tasks: - docker_image_facts: name=""debian:latest"" - docker_image_facts: name=""debian:not-latest"" ``` ##### EXPECTED RESULTS The latest image not being returned in the second task. 
##### ACTUAL RESULTS ``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [docker_image_facts] ****************************************************** PLAYBOOK: docker-test.yml ****************************************************** 1 plays in docker-test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466 `"" && echo ansible-tmp-1476812867.36-239285172461466=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpLHbQfE TO /root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/setup <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/setup; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812867.36-239285172461466/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [docker_image_facts] ****************************************************** task path: /root/docker-test.yml:4 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847 `"" && echo ansible-tmp-1476812867.85-152102133665847=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpvxleDe TO /root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/docker_image_facts <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/docker_image_facts; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812867.85-152102133665847/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""images"": [{""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/bash""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""27a6f130fb20b127801c1179ff49cc6adcb252c6d91f5955421e4c66a72ced31"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""/bin/bash\""]""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-09-23T18:08:51.133779867Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": {""DeviceId"": ""2"", ""DeviceName"": 
""docker-202:2-403255-60c0ce068752b2eacce3a5c0985bd008bb4641443697174f5eab997d09c81ef6"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""Id"": ""sha256:ddf73f48a05d97e4f473d0b4ccb53383cbb0647d10e34b62d68bfc859cc6bcf9"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [""debian@sha256:677f184a5969847c0ad91d30cf1f0b925cd321e6c66e3ed5fbf9858f58425d1a""], ""RepoTags"": [""debian:latest""], ""RootFS"": {""Layers"": [""sha256:142a601d97936307e75220c35dde0348971a9584c21e7cb42e1f7004005432ab""], ""Type"": ""layers""}, ""Size"": 122988258, ""VirtualSize"": 122988258}], ""invocation"": {""module_args"": {""api_version"": null, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""docker_host"": null, ""filter_logger"": false, ""key_path"": null, ""name"": [""debian:latest""], ""ssl_version"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null}, ""module_name"": ""docker_image_facts""}} TASK [docker_image_facts] ****************************************************** task path: /root/docker-test.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637 `"" && echo ansible-tmp-1476812868.02-253123017828637=""` echo $HOME/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp0oNjN1 TO /root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/docker_image_facts <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/docker_image_facts; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476812868.02-253123017828637/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""images"": [{""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/bash""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Container"": ""27a6f130fb20b127801c1179ff49cc6adcb252c6d91f5955421e4c66a72ced31"", ""ContainerConfig"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""/bin/sh"", ""-c"", ""#(nop) "", ""CMD [\""/bin/bash\""]""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""383850eeb47b"", ""Image"": ""sha256:3681da375b325973f1297e0f64a2feef4fcae77715d54c34aee2188431ed6f46"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-09-23T18:08:51.133779867Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": {""Data"": {""DeviceId"": ""2"", ""DeviceName"": ""docker-202:2-403255-60c0ce068752b2eacce3a5c0985bd008bb4641443697174f5eab997d09c81ef6"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""Id"": ""sha256:ddf73f48a05d97e4f473d0b4ccb53383cbb0647d10e34b62d68bfc859cc6bcf9"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": 
[""debian@sha256:677f184a5969847c0ad91d30cf1f0b925cd321e6c66e3ed5fbf9858f58425d1a""], ""RepoTags"": [""debian:latest""], ""RootFS"": {""Layers"": [""sha256:142a601d97936307e75220c35dde0348971a9584c21e7cb42e1f7004005432ab""], ""Type"": ""layers""}, ""Size"": 122988258, ""VirtualSize"": 122988258}], ""invocation"": {""module_args"": {""api_version"": null, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""docker_host"": null, ""filter_logger"": false, ""key_path"": null, ""name"": [""debian:not-latest""], ""ssl_version"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null}, ""module_name"": ""docker_image_facts""}} PLAY RECAP ********************************************************************* localhost : ok=3 changed=0 unreachable=0 failed=0 ``` ",1,docker image facts does not search for tag please do not report issues requests related to ansible modules here report them to the appropriate modules core or modules extras project also verify first that your issue request is not already reported in github issue type bug report component name docker image facts ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu debian jessie docker version client version api version go version git commit built tue oct os arch linux server version api version go version git commit built tue oct os arch linux summary docker image facts retuirns all named images regardless of tag even when specifying a tag in the name attribute of the module steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used specify tag in docker image facts name attribute using latest is returned in the image array along with any other debian image in the local registry hosts localhost connection local tasks docker image facts name debian latest docker image facts name debian not latest expected results the latest image not being returned in the second task actual results play task ok task playbook docker test yml plays in docker test yml play task establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmplhbqfe to root ansible tmp ansible tmp setup exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp setup rm rf root ansible tmp ansible tmp dev null sleep ok task task path root docker test yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpvxlede to root ansible tmp ansible tmp docker image facts exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp docker image facts rm rf root ansible tmp ansible tmp dev null sleep ok changed false images domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir container containerconfig attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce 
false tty false user volumes null workingdir created dockerversion graphdriver data deviceid devicename docker devicesize name devicemapper id os linux parent repodigests repotags rootfs layers type layers size virtualsize invocation module args api version null cacert path null cert path null debug false docker host null filter logger false key path null name ssl version null timeout null tls null tls hostname null tls verify null module name docker image facts task task path root docker test yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp docker image facts exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp docker image facts rm rf root ansible tmp ansible tmp dev null sleep ok changed false images domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir container containerconfig attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir created dockerversion graphdriver data deviceid devicename docker devicesize name devicemapper id os linux parent repodigests repotags rootfs layers type layers size virtualsize invocation module args api version null cacert path null cert path null debug false docker host null filter logger false key path null name ssl version null timeout null tls null tls hostname null tls verify null module name docker image facts play recap localhost ok changed unreachable failed ,1 1773,6575799351.0,IssuesEvent,2017-09-11 17:22:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_networkinterface cannot create interface if virtual network is in different resource group,affects_2.1 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_networkinterface module ##### ANSIBLE VERSION ansible 2.1.2.0 config file = /home/amz/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Cannot create network interface to connect to virtual network in other resource group. ##### STEPS TO REPRODUCE Create task: ``` - name: Create a network interface with minimal parameters azure_rm_networkinterface: name: ""interfaceName"" resource_group: ""ansibleManaged"" virtual_network_name: ""sharedNetwork"" subnet_name: default location: ""northeurope"" ``` ##### EXPECTED RESULTS Create network interface. Virtual Network in ARM are restricted to region no to resource group. It should be possible to access Virtual Network in same region. 
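What the report is effectively asking for is a way to name the virtual network's own resource group. A sketch of that idea follows; `virtual_network_resource_group` is a hypothetical parameter that does not exist in this module version, and `sharedNetworking` is a made-up resource group name:
```
# Illustration only -- "virtual_network_resource_group" is hypothetical and only
# sketches the requested behaviour; it is not a real option of this module version.
- name: Create a network interface whose virtual network lives in another resource group
  azure_rm_networkinterface:
    name: "interfaceName"
    resource_group: "ansibleManaged"
    virtual_network_resource_group: "sharedNetworking"  # hypothetical
    virtual_network_name: "sharedNetwork"
    subnet_name: default
    location: "northeurope"
```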
##### ACTUAL RESULTS Interface creation task failed because it can't find the virtual network in the NIC's own resource group: ``` TASK [azure : Create a network interface with minimal parameters] ************** task path: /home/amz/ansible/roles/azure/tasks/create_azure.yml:2 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: amz <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441 `"" && echo ansible-tmp-1475747675.66-52569430125441=""` echo $HOME/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpguR_EI TO /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/ /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 python /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface; rm -rf ""/home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ad_user"": null, ""append_tags"": true, ""client_id"": null, ""location"": ""northeurope"", ""name"": ""interfaceName"", ""open_ports"": null, ""os_type"": ""Linux"", ""password"": null, ""private_ip_address"": null, ""private_ip_allocation_method"": ""Dynamic"", ""profile"": null, ""public_ip"": true, ""public_ip_address_name"": null, ""public_ip_allocation_method"": ""Dynamic"", ""resource_group"": ""ansibleManaged"", ""secret"": null, ""security_group_name"": null, ""state"": ""present"", ""subnet_name"": ""default"", ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": ""sharedNetwork""}, ""module_name"": ""azure_rm_networkinterface""}, ""msg"": ""Error: fetching subnet default in virtual network sharedNetwork - The Resource 'Microsoft.Network/virtualNetworks/sharedNetwork' under resource group 'ansibleManaged' was not found.""} ``` ",True,"azure_rm_networkinterface cannot create interface if virtual network is in different resource group - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_networkinterface module ##### ANSIBLE VERSION ansible 2.1.2.0 config file = /home/amz/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Cannot create network interface to connect to virtual network in other resource group. ##### STEPS TO REPRODUCE Create task: ``` - name: Create a network interface with minimal parameters azure_rm_networkinterface: name: ""interfaceName"" resource_group: ""ansibleManaged"" virtual_network_name: ""sharedNetwork"" subnet_name: default location: ""northeurope"" ``` ##### EXPECTED RESULTS Create network interface. Virtual Networks in ARM are restricted to a region, not to a resource group. It should be possible to access a Virtual Network in the same region. 
##### ACTUAL RESULTS Interface creation task faile because it can't find virtual network in ``` TASK [azure : Create a network interface with minimal parameters] ************** task path: /home/amz/ansible/roles/azure/tasks/create_azure.yml:2 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: amz <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441 `"" && echo ansible-tmp-1475747675.66-52569430125441=""` echo $HOME/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpguR_EI TO /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/ /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=pl_PL.UTF-8 LC_ALL=pl_PL.UTF-8 LC_MESSAGES=pl_PL.UTF-8 python /home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/azure_rm_networkinterface; rm -rf ""/home/amz/.ansible/tmp/ansible-tmp-1475747675.66-52569430125441/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ad_user"": null, ""append_tags"": true, ""client_id"": null, ""location"": ""northeurope"", ""name"": ""interfaceName"", ""open_ports"": null, ""os_type"": ""Linux"", ""password"": null, ""private_ip_address"": null, ""private_ip_allocation_method"": ""Dynamic"", ""profile"": null, ""public_ip"": true, ""public_ip_address_name"": null, ""public_ip_allocation_method"": ""Dynamic"", ""resource_group"": ""ansibleManaged"", ""secret"": null, ""security_group_name"": null, ""state"": ""present"", ""subnet_name"": ""default"", ""subscription_id"": null, ""tags"": null, ""tenant"": null, ""virtual_network_name"": ""sharedNetwork""}, ""module_name"": ""azure_rm_networkinterface""}, ""msg"": ""Error: fetching subnet default in virtual network sharedNetwork - The Resource 'Microsoft.Network/virtualNetworks/sharedNetwork' under resource group 'ansibleManaged' was not found.""} ``` ",1,azure rm networkinterface cannot create interface if virtual network is in different resource group issue type bug report component name azure rm networkinterface module ansible version ansible config file home amz ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary cannot create network interface to connect to virtual network in other resource group steps to reproduce create task name create a network interface with minimal parameters azure rm networkinterface name interfacename resource group ansiblemanaged virtual network name sharednetwork subnet name default location northeurope expected results create network interface virtual network in arm are restricted to region no to resource group it should be possible to access virtual network in same region actual results interface creation task faile because it can t find virtual network in task task path home amz ansible roles azure tasks create azure yml establish local connection for user amz exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp 
tmpgur ei to home amz ansible tmp ansible tmp azure rm networkinterface exec bin sh c chmod u x home amz ansible tmp ansible tmp home amz ansible tmp ansible tmp azure rm networkinterface sleep exec bin sh c lang pl pl utf lc all pl pl utf lc messages pl pl utf python home amz ansible tmp ansible tmp azure rm networkinterface rm rf home amz ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args ad user null append tags true client id null location northeurope name interfacename open ports null os type linux password null private ip address null private ip allocation method dynamic profile null public ip true public ip address name null public ip allocation method dynamic resource group ansiblemanaged secret null security group name null state present subnet name default subscription id null tags null tenant null virtual network name sharednetwork module name azure rm networkinterface msg error fetching subnet default in virtual network sharednetwork the resource microsoft network virtualnetworks sharednetwork under resource group ansiblemanaged was not found ,1 1737,6574875811.0,IssuesEvent,2017-09-11 14:21:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"gce module ignores specified service account, always uses project default",affects_2.1 bug_report cloud gce waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gce ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/etlctl/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ~/.ansible.cfg: ``` [defaults] inventory = $HOME/.ansible.hosts host_key_checking = False display_skipped_hosts = False display_args_to_stdout = False nocows = 1 retry_files_enabled = False module_compression = 'ZIP_STORED' [ssh_connection] pipelining = True scp_if_ssh = True ``` ~/.ansible.hosts: ``` localhost ``` ##### OS / ENVIRONMENT `ubuntu-14.04` image running on existing GCE node, with `libcloud-1.3.0` installed via pip (note: this also requires `backports.ssl-match-hostname` to function, an undeclared dependency) ##### SUMMARY Ansible's ""gce"" module _always_ creates the spawned machine with the default service account for our project. The specified service account in `service_account_email` is NOT assigned to the new node. Note that `service_account_permissions` _does_ seem to be used because originally we tried to run an app on a created node and it didn't have the needed permissions; changing the `service_account_permissions` to include the ones required (in our case bigquery) did work. ##### STEPS TO REPRODUCE 1. create a service account that has all possible assigned roles (so permissions aren't an issue) 2. manually create a GCE instance to act as Ansible controller using above service account 3. verify that the instance has the correct service account in web UI 4. install `ansible` on the instance via apt repository (see version above) 5. install `libcloud` via `libpip` (see version above) 6. 
run ansible gce module play (see below) here's the task running on the GCE-based Ansible controller that's making the instance: ``` - name: create_gce_instance gce: name: '{{instance_name}}' zone: '{{gce_compute_zone}}' image: '{{worker_machine_image}}' machine_type: '{{worker_machine_type}}' project_id: '{{gce_project}}' credentials_file: '{{ctlr_service_creds}}' service_account_email: '{{ctlr_service_acct}}' service_account_permissions: - bigquery - cloud-platform - compute-ro - useraccounts-ro - logging-write - storage-rw # inject metadata from extra args metadata: source_path: '{{source_path}}' pubsub_id: '{{pubsub_id}}' register: etlnode ``` This is running with `hosts: localhost`, `connection: local`, `gather_facts: no`, and `become: False` Note that the `credentials_file` contains a JSON format keyfile downloaded in the web UI for the same service account specified in `service_account_email`. I assume that the `credentials_file` gives the credentials used to create the machine, while the `service_account_email` contains the desired service account the created machine should get (since `service_account_permissions` does that for access scopes, i.e. specifies what you want the new instance to have). We're using the same one in both just for testing because it has every permission, but eventually of course want to use a reduced permission set service account for the spawned worker. ##### EXPECTED RESULTS created instance will have the service account specified in `service_account_email` ##### ACTUAL RESULTS created node _always_ has our project's default service account, despite `service_account_email` being used in the play ",True,"gce module ignores specified service account, always uses project default - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gce ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/etlctl/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ~/.ansible.cfg: ``` [defaults] inventory = $HOME/.ansible.hosts host_key_checking = False display_skipped_hosts = False display_args_to_stdout = False nocows = 1 retry_files_enabled = False module_compression = 'ZIP_STORED' [ssh_connection] pipelining = True scp_if_ssh = True ``` ~/.ansible.hosts: ``` localhost ``` ##### OS / ENVIRONMENT `ubuntu-14.04` image running on existing GCE node, with `libcloud-1.3.0` installed via pip (note: this also requires `backports.ssl-match-hostname` to function, an undeclared dependency) ##### SUMMARY Ansible's ""gce"" module _always_ creates the spawned machine with the default service account for our project. The specified service account in `service_account_email` is NOT assigned to the new node. Note that `service_account_permissions` _does_ seem to be used because originally we tried to run an app on a created node and it didn't have the needed permissions; changing the `service_account_permissions` to include the ones required (in our case bigquery) did work. ##### STEPS TO REPRODUCE 1. create a service account that has all possible assigned roles (so permissions aren't an issue) 2. manually create a GCE instance to act as Ansible controller using above service account 3. verify that the instance has the correct service account in web UI 4. install `ansible` on the instance via apt repository (see version above) 5. install `libcloud` via `libpip` (see version above) 6. 
run ansible gce module play (see below) here's the task running on the GCE-based Ansible controller that's making the instance: ``` - name: create_gce_instance gce: name: '{{instance_name}}' zone: '{{gce_compute_zone}}' image: '{{worker_machine_image}}' machine_type: '{{worker_machine_type}}' project_id: '{{gce_project}}' credentials_file: '{{ctlr_service_creds}}' service_account_email: '{{ctlr_service_acct}}' service_account_permissions: - bigquery - cloud-platform - compute-ro - useraccounts-ro - logging-write - storage-rw # inject metadata from extra args metadata: source_path: '{{source_path}}' pubsub_id: '{{pubsub_id}}' register: etlnode ``` This is running with `hosts: localhost`, `connection: local`, `gather_facts: no`, and `become: False` Note that the `credentials_file` contains a JSON format keyfile downloaded in the web UI for the same service account specified in `service_account_email`. I assume that the `credentials_file` gives the credentials used to create the machine, while the `service_account_email` contains the desired service account the created machine should get (since `service_account_permissions` does that for access scopes, i.e. specifies what you want the new instance to have). We're using the same one in both just for testing because it has every permission, but eventually of course want to use a reduced permission set service account for the spawned worker. ##### EXPECTED RESULTS created instance will have the service account specified in `service_account_email` ##### ACTUAL RESULTS created node _always_ has our project's default service account, despite `service_account_email` being used in the play ",1,gce module ignores specified service account always uses project default issue type bug report component name gce ansible version ansible config file home etlctl ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible cfg inventory home ansible hosts host key checking false display skipped hosts false display args to stdout false nocows retry files enabled false module compression zip stored pipelining true scp if ssh true ansible hosts localhost os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu image running on existing gce node with libcloud installed via pip note this also requires backports ssl match hostname to function an undeclared dependency summary ansible s gce module always creates the spawned machine with the default service account for our project the specified service account in service account email is not assigned to the new node note that service account permissions does seem to be used because originally we tried to run an app on a created node and it didn t have the needed permissions changing the service account permissions to include the ones required in our case bigquery did work steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a service account that has all possible assigned roles so permissions aren t an issue manually create a gce instance to act as ansible controller using above service account verify that the instance has the correct service account in web ui install ansible on the instance via apt repository see version above install libcloud via libpip see version above run ansible gce module play see below here s the 
task running on the gce based ansible controller that s making the instance name create gce instance gce name instance name zone gce compute zone image worker machine image machine type worker machine type project id gce project credentials file ctlr service creds service account email ctlr service acct service account permissions bigquery cloud platform compute ro useraccounts ro logging write storage rw inject metadata from extra args metadata source path source path pubsub id pubsub id register etlnode this is running with hosts localhost connection local gather facts no and become false note that the credentials file contains a json format keyfile downloaded in the web ui for the same service account specified in service account email i assume that the credentials file gives the credentials used to create the machine while the service account email contains the desired service account the created machine should get since service account permissions does that for access scopes i e specifies what you want the new instance to have we re using the same one in both just for testing because it has every permission but eventually of course want to use a reduced permission set service account for the spawned worker expected results created instance will have the service account specified in service account email actual results created node always has our project s default service account despite service account email being used in the play ,1 1059,4875626438.0,IssuesEvent,2016-11-16 10:10:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible 2.2.0 Unarchive sudden unexpected results,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/####/Documents/workspaces/######/devops-infrastructure/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've been using this script and all since Ansible 2.1, I moved onto 2.2 rc1 2 weeks ago and this has been working well for setting up new environments. Suddenly today it Unzip's into 2 random folders and Unarchive does not extract to /var/www/ anymore like it used to. It now creates a folder called Craft... whatever the extension is instead of extracting the content of that zip into this directory. It also creates a folder called __MACOSX which contains the original zip and it leaves both of those ugly folders in the directory. Then my next command completely breaks and the whole rest of the playbook that used to work breaks. I'm not entirely sure what changed over the weekend. ##### STEPS TO REPRODUCE ``` - hosts: test remote_user: root tasks: # Download and Unzip CraftCMS - name: download and unzip craft unarchive: src=https://craftcms.com/latest.zip?accept_license=yes dest=/var/www/ remote_src=yes group=www-data owner=www-data creates=/var/www/craft validate_certs=no # Move public/ to html/ - name: rename public/ to html/ shell: mv /var/www/public/* /var/www/html/ && mv /var/www/html/htaccess /var/www/html/.htaccess args: creates: /var/www/html/robots.txt ``` ##### EXPECTED RESULTS What has been happening and that I expect to happen is it downloads the .zip for CraftCMS, it extracts the content of that zip file into /var/www and sets the owner and group of the files to www-data. So I expect to see a /craft and a /public directory inside of /var/www/ after the first task runs. 
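One way to narrow this down (a sketch, untested) is to split the download from the extraction so the fetched archive can be inspected on the host, and to clean up the stray `__MACOSX` folder afterwards; only documented get_url, unarchive and file options are used here, and `/tmp/craft.zip` is an arbitrary staging path:
```
# Possible workaround sketch (untested): download first, extract from the remote
# path, then remove the stray __MACOSX folder reported below.
- hosts: test
  remote_user: root
  tasks:
    - name: download craft archive
      get_url:
        url: https://craftcms.com/latest.zip?accept_license=yes
        dest: /tmp/craft.zip
        validate_certs: no

    - name: unzip craft
      unarchive:
        src: /tmp/craft.zip
        dest: /var/www/
        remote_src: yes
        owner: www-data
        group: www-data
        creates: /var/www/craft

    - name: remove the __MACOSX folder if it appears
      file:
        path: /var/www/__MACOSX
        state: absent
```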
##### ACTUAL RESULTS ``` <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826 `"" && echo ansible-tmp-1477923506.24-280511070608826=""` echo $HOME/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826 `"" ) && sleep 0'""'""'' <139.59.162.126> PUT /var/folders/7w/bgj8hbf50vdd_4m42s91kkjc0000gq/T/tmpiLC579 TO /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py <139.59.162.126> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r '[139.59.162.126]' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/ /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py && sleep 0'""'""'' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r -tt 139.59.162.126 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/"" > /dev/null 2>&1 && sleep 0'""'""'' Using module file /Library/Python/2.7/site-packages/ansible-2.2.0.0-py2.7.egg/ansible/modules/core/files/unarchive.py <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460 `"" && echo ansible-tmp-1477923509.7-23354923558460=""` echo $HOME/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460 `"" ) && sleep 0'""'""'' <139.59.162.126> PUT /var/folders/7w/bgj8hbf50vdd_4m42s91kkjc0000gq/T/tmpiNyTln TO /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py <139.59.162.126> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r '[139.59.162.126]' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/ /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py && sleep 0'""'""'' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r -tt 139.59.162.126 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/"" > /dev/null 2>&1 && sleep 0'""'""'' --- a whole lot happens here that just clears the whole terminal almost.. ...2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/ClosureCommand.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._ClosureCommand.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/CommandInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._CommandInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/CreateResponseClassEvent.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._CreateResponseClassEvent.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/DefaultRequestSerializer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._DefaultRequestSerializer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/DefaultResponseParser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._DefaultResponseParser.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/AliasFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._AliasFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/CompositeFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._CompositeFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/ConcreteClassFactory.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._ConcreteClassFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/FactoryInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._FactoryInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/MapFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._MapFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/ServiceDescriptionFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._ServiceDescriptionFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._Factory \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/AbstractRequestVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._AbstractRequestVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/BodyVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._BodyVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/HeaderVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._HeaderVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/JsonVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._JsonVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/PostFieldVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._PostFieldVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/PostFileVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._PostFileVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/QueryVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._QueryVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/RequestVisitorInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._RequestVisitorInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/ResponseBodyVisitor.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/LocationVisitor/Request/._ResponseBodyVisitor.php \n [unzip continues in the same pattern for the remaining vendor packages (guzzle, imagine, lsolesen/pel, pclzip, phpmailer, simplepie, symfony/event-dispatcher): every real file is inflated under /var/www/Craft-2.6.2950/craft/app/vendor/..., and a matching AppleDouble companion with a ._ prefix is inflated under /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/...] \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._ContainerAwareEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/TraceableEventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._TraceableEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/TraceableEventDispatcherInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._TraceableEventDispatcherInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/WrappedListener.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._WrappedListener.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Debug \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/DependencyInjection/RegisterListenersPass.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/DependencyInjection/._RegisterListenersPass.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._DependencyInjection \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Event.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Event.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventDispatcherInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventDispatcherInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventSubscriberInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventSubscriberInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/GenericEvent.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._GenericEvent.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/ImmutableEventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._ImmutableEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/LICENSE \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._LICENSE \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/phpunit.xml.dist \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._phpunit.xml.dist \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/README.md \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._README.md \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/AbstractEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._AbstractEventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/ContainerAwareEventDispatcherTest.php 
\n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._ContainerAwareEventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/Debug/TraceableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/Debug/._TraceableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._Debug \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/DependencyInjection/RegisterListenersPassTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/DependencyInjection/._RegisterListenersPassTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._DependencyInjection \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/EventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._EventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/EventTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._EventTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/GenericEventTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._GenericEventTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/ImmutableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._ImmutableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/._event-dispatcher \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/._symfony \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/CHANGELOG \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._CHANGELOG \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/composer.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._composer.json \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/advanced.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._advanced.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/advanced_legacy.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._advanced_legacy.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/api.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._api.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/coding_standards.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._coding_standards.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/deprecated.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._deprecated.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/abs.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._abs.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/batch.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._batch.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/capitalize.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._capitalize.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/convert_encoding.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._convert_encoding.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/date.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._date.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/date_modify.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._date_modify.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/default.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._default.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/escape.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._escape.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/first.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._first.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/format.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._format.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/join.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._join.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/json_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._json_encode.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/keys.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._keys.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/last.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._last.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/length.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._length.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/lower.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._lower.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/merge.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._merge.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/nl2br.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._nl2br.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/number_format.rst \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._number_format.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/raw.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._raw.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/replace.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._replace.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/reverse.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._reverse.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/round.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._round.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/slice.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._slice.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/sort.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._sort.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/split.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._split.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/striptags.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._striptags.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/title.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._title.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/trim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._trim.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/upper.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._upper.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/url_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._url_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._filters \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/attribute.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._attribute.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/block.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._block.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/constant.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._constant.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/cycle.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._cycle.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/date.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._date.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/dump.rst \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._dump.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/include.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._include.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/max.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._max.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/min.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._min.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/parent.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._parent.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/random.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._random.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/range.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._range.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/source.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._source.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/template_from_string.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._template_from_string.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._functions \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/installation.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._installation.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/internals.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._internals.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/intro.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._intro.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/recipes.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._recipes.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/autoescape.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._autoescape.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/block.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._block.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/do.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._do.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/embed.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._embed.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/extends.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._extends.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/filter.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._filter.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/flush.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._flush.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/for.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._for.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/from.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._from.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/if.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._if.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/import.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._import.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/include.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._include.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/macro.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._macro.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/sandbox.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._sandbox.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/set.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._set.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/spaceless.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._spaceless.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/use.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._use.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/verbatim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._verbatim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._tags \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/templates.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._templates.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/constant.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._constant.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/defined.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._defined.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/divisibleby.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._divisibleby.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/empty.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._empty.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/even.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._even.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/iterable.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._iterable.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/null.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._null.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/odd.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._odd.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/sameas.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._sameas.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._doc \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/config.m4 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._config.m4 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/config.w32 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._config.w32 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/php_twig.h \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._php_twig.h \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/twig.c \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._twig.c \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._ext \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Autoloader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Autoloader.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/BaseNodeVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._BaseNodeVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/Filesystem.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/._Filesystem.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/Null.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/._Null.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Cache \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/CacheInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._CacheInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Compiler.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Compiler.php \n 
inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/CompilerInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._CompilerInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Environment.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Environment.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Loader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Loader.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Runtime.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Runtime.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Syntax.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Syntax.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Error \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Error.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExistsLoaderInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExistsLoaderInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExpressionParser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExpressionParser.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Core.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Core.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Debug.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Debug.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Escaper.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Escaper.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/GlobalsInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._GlobalsInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/InitRuntimeInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._InitRuntimeInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Optimizer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Optimizer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Profiler.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Profiler.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Sandbox.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Sandbox.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Staging.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Staging.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/StringLoader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._StringLoader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Extension \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Extension.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExtensionInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExtensionInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FileExtensionEscapingStrategy.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FileExtensionEscapingStrategy.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Method.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Method.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Filter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Filter.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FilterCallableInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FilterCallableInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FilterInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FilterInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Method.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Method.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Function \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FunctionCallableInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FunctionCallableInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FunctionInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FunctionInterface.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Lexer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Lexer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/LexerInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._LexerInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Array.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Array.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Chain.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Chain.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Filesystem.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Filesystem.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/String.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._String.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Loader \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/LoaderInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._LoaderInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Markup.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Markup.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/AutoEscape.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._AutoEscape.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Block.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Block.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/BlockReference.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._BlockReference.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Body.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Body.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/CheckSecurity.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._CheckSecurity.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Do.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Do.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Embed.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Embed.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Array.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Array.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/AssignName.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._AssignName.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Add.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Add.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/And.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._And.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseAnd.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseAnd.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseOr.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseOr.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseXor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseXor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Concat.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Concat.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Div.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Div.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/EndsWith.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._EndsWith.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Equal.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Equal.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/FloorDiv.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._FloorDiv.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Greater.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Greater.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/GreaterEqual.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._GreaterEqual.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/In.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._In.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Less.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Less.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/LessEqual.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._LessEqual.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Matches.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Matches.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Mod.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Mod.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Mul.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Mul.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/NotEqual.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._NotEqual.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/NotIn.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._NotIn.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Or.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Or.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Power.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Power.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Range.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Range.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/StartsWith.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._StartsWith.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Sub.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Sub.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Binary \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Binary.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/BlockReference.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._BlockReference.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Call.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Call.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Conditional.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Conditional.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Constant.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Constant.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/ExtensionReference.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._ExtensionReference.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Filter/Default.php 
\n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Filter/._Default.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Filter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Filter.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Filter.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/GetAttr.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._GetAttr.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/MethodCall.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._MethodCall.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Name.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Name.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/NullCoalesce.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._NullCoalesce.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Parent.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Parent.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/TempName.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._TempName.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Constant.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Constant.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Defined.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Defined.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Divisibleby.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Divisibleby.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Even.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Even.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Null.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Null.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Odd.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Odd.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/Sameas.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test/._Sameas.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Test.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Test.php \n [unzip listing continues in the same pattern through the rest of the Craft-2.6.2950/craft/app/vendor/twig/twig tree (lib, test, and Fixtures files), each real file accompanied by its __MACOSX/._ AppleDouble entry] \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_isolation.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_isolation.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_nested.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_nested.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_without_extends.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_without_extends.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_without_extends_but_traits.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_without_extends_but_traits.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/template_instance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._template_instance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/use.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._use.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._inheritance \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/endmacro_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._endmacro_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/external.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._external.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/from.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._from.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/from_with_reserved_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._from_with_reserved_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/global.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._global.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/import_with_reserved_nam.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._import_with_reserved_nam.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/reserved_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._reserved_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/self_import.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._self_import.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/special_chars.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._special_chars.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/super_globals.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._super_globals.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._macro \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/basic.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._basic.legacy.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/mixed_usage_with_raw.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._mixed_usage_with_raw.legacy.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/whitespace_control.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._whitespace_control.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._raw \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/not_valid1.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._not_valid1.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/not_valid2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._not_valid2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._sandbox \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/capture-empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._capture-empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/capture.test \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._capture.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._set \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/spaceless/simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/spaceless/._simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._spaceless \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/special_chars.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._special_chars.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/trim_block.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._trim_block.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/aliases.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._aliases.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/deep.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._deep.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/deep_empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._deep_empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/inheritance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._inheritance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/inheritance2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._inheritance2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/multiple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._multiple.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/multiple_aliases.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._multiple_aliases.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block3.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block3.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._use \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/mixed_usage_with_raw.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._mixed_usage_with_raw.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/whitespace_control.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._whitespace_control.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._verbatim \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/._tags \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/array.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._array.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/constant.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._constant.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/defined.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._defined.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/even.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._even.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/in.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._in.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/in_with_objects.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._in_with_objects.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/iterable.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._iterable.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/null_coalesce.test \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._null_coalesce.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/odd.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._odd.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/._tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Fixtures \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/IntegrationTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._IntegrationTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyFixtures/test.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyFixtures/._test.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LegacyFixtures \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyIntegrationTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LegacyIntegrationTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LexerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LexerTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/ArrayTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._ArrayTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/ChainTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._ChainTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/FilesystemTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._FilesystemTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_empty_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_empty_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_nonexistent_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_nonexistent_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_null_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_null_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_valid_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_valid_parent.html.twig \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/spare_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._spare_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._inheritance \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_bis/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_bis/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_bis \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_final/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_final/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_final \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_quater/named_absolute.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_quater/._named_absolute.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_quater \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_ter/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_ter/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_ter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_bis/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_bis/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_bis \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_final/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_final/._index.html \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_final \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_ter/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_ter/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_ter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme1/blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme1/._blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/._theme1 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme2/blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme2/._blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/._theme2 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._themes \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._Fixtures \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Loader \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NativeExtensionTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._NativeExtensionTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/AutoEscapeTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._AutoEscapeTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/BlockReferenceTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._BlockReferenceTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/BlockTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._BlockTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/DoTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._DoTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ArrayTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ArrayTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/AssignNameTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._AssignNameTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/AddTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._AddTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/AndTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._AndTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/ConcatTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._ConcatTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/DivTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._DivTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/FloorDivTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._FloorDivTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/ModTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._ModTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/MulTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._MulTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/OrTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._OrTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/SubTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._SubTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._Binary \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/CallTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._CallTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ConditionalTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ConditionalTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ConstantTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ConstantTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/FilterTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._FilterTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/FunctionTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._FunctionTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/GetAttrTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._GetAttrTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/NameTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._NameTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ParentTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ParentTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/FilterInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._FilterInclude.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/FunctionInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._FunctionInclude.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/TestInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._TestInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._PHP53 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/TestTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._TestTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/NegTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._NegTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/NotTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._NotTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/PosTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._PosTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._Unary \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._Expression \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ForTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ForTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/IfTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._IfTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ImportTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ImportTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/IncludeTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._IncludeTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/MacroTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._MacroTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ModuleTest.php \n 
inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ModuleTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/PrintTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._PrintTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SandboxedPrintTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SandboxedPrintTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SandboxTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SandboxTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SetTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SetTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SpacelessTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SpacelessTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Node \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NodeVisitor/OptimizerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NodeVisitor/._OptimizerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._NodeVisitor \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/ParserTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._ParserTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/AbstractTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._AbstractTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/BlackfireTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._BlackfireTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/HtmlTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._HtmlTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/._Dumper \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/ProfileTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/._ProfileTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Profiler \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/TemplateTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._TemplateTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/TokenStreamTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._TokenStreamTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/._Tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/._Twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/._vendor \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/BaseWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._BaseWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/FeedWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._FeedWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/GetHelpWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._GetHelpWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/IWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._IWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/NewUsersWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._NewUsersWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/QuickPostWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._QuickPostWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/RecentEntriesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._RecentEntriesWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/UpdatesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._UpdatesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/._widgets \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._app \n inflating: /var/www/Craft-2.6.2950/craft/config/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/config/db.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._db.php \n inflating: /var/www/Craft-2.6.2950/craft/config/general.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._general.php \n inflating: /var/www/Craft-2.6.2950/craft/config/redactor/Simple.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/redactor/._Simple.json \n inflating: /var/www/Craft-2.6.2950/craft/config/redactor/Standard.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/redactor/._Standard.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._redactor \n inflating: /var/www/Craft-2.6.2950/craft/config/routes.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._routes.php \n inflating: /var/www/Craft-2.6.2950/craft/config/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._config \n extracting: /var/www/Craft-2.6.2950/craft/plugins/.gitignore \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._.gitignore \n inflating: /var/www/Craft-2.6.2950/craft/plugins/.htaccess \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/plugins/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._plugins \n extracting: /var/www/Craft-2.6.2950/craft/storage/.gitignore \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._.gitignore \n inflating: /var/www/Craft-2.6.2950/craft/storage/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/storage/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._storage \n inflating: /var/www/Craft-2.6.2950/craft/templates/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/templates/404.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._404.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/_layout.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/.__layout.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._index.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/news/_entry.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/news/.__entry.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/news/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/news/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._news \n inflating: /var/www/Craft-2.6.2950/craft/templates/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._templates \n inflating: /var/www/Craft-2.6.2950/craft/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._craft \n inflating: /var/www/Craft-2.6.2950/public/htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._htaccess \n inflating: /var/www/Craft-2.6.2950/public/index.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._index.php \n inflating: /var/www/Craft-2.6.2950/public/robots.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._robots.txt \n inflating: /var/www/Craft-2.6.2950/public/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._public \n inflating: /var/www/Craft-2.6.2950/readme.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._readme.txt \n inflating: /var/www/__MACOSX/._Craft-2.6.2950 \n"", ""rc"": 0 }, ""gid"": 33, ""group"": ""www-data"", ""handler"": ""ZipArchive"", ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""copy"": true, ""creates"": ""/var/www/craft"", ""delimiter"": null, ""dest"": ""/var/www/"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": ""www-data"", ""keep_newer"": false, ""list_files"": false, ""mode"": null, ""original_basename"": ""latest.zip?accept_license=yes"", ""owner"": ""www-data"", ""regexp"": null, ""remote_src"": true, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""https://craftcms.com/latest.zip?accept_license=yes"", ""unsafe_writes"": null, ""validate_certs"": false } }, ""mode"": ""0775"", ""owner"": ""www-data"", 
""size"": 4096, ""src"": ""/tmp/ansible_Zurjor/latest.zip?accept_license=yes"", ""state"": ""directory"", ""uid"": 33 } ``` ",True,"Ansible 2.2.0 Unarchive sudden unexpected results - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/####/Documents/workspaces/######/devops-infrastructure/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've been using this script and all since Ansible 2.1, I moved onto 2.2 rc1 2 weeks ago and this has been working well for setting up new environments. Suddenly today it Unzip's into 2 random folders and Unarchive does not extract to /var/www/ anymore like it used to. It now creates a folder called Craft... whatever the extension is instead of extracting the content of that zip into this directory. It also creates a folder called __MACOSX which contains the original zip and it leaves both of those ugly folders in the directory. Then my next command completely breaks and the whole rest of the playbook that used to work breaks. I'm not entirely sure what changed over the weekend. ##### STEPS TO REPRODUCE ``` - hosts: test remote_user: root tasks: # Download and Unzip CraftCMS - name: download and unzip craft unarchive: src=https://craftcms.com/latest.zip?accept_license=yes dest=/var/www/ remote_src=yes group=www-data owner=www-data creates=/var/www/craft validate_certs=no # Move public/ to html/ - name: rename public/ to html/ shell: mv /var/www/public/* /var/www/html/ && mv /var/www/html/htaccess /var/www/html/.htaccess args: creates: /var/www/html/robots.txt ``` ##### EXPECTED RESULTS What has been happening and that I expect to happen is it downloads the .zip for CraftCMS, it extracts the content of that zip file into /var/www and sets the owner and group of the files to www-data. So I expect to see a /craft and a /public directory inside of /var/www/ after the first task runs. 
##### ACTUAL RESULTS ``` <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826 `"" && echo ansible-tmp-1477923506.24-280511070608826=""` echo $HOME/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826 `"" ) && sleep 0'""'""'' <139.59.162.126> PUT /var/folders/7w/bgj8hbf50vdd_4m42s91kkjc0000gq/T/tmpiLC579 TO /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py <139.59.162.126> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r '[139.59.162.126]' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/ /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py && sleep 0'""'""'' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r -tt 139.59.162.126 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/stat.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477923506.24-280511070608826/"" > /dev/null 2>&1 && sleep 0'""'""'' Using module file /Library/Python/2.7/site-packages/ansible-2.2.0.0-py2.7.egg/ansible/modules/core/files/unarchive.py <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460 `"" && echo ansible-tmp-1477923509.7-23354923558460=""` echo $HOME/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460 `"" ) && sleep 0'""'""'' <139.59.162.126> PUT /var/folders/7w/bgj8hbf50vdd_4m42s91kkjc0000gq/T/tmpiNyTln TO /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py <139.59.162.126> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r '[139.59.162.126]' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r 139.59.162.126 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/ /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py && sleep 0'""'""'' <139.59.162.126> ESTABLISH SSH CONNECTION FOR USER: root <139.59.162.126> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/admin/.ansible/cp/ansible-ssh-%h-%p-%r -tt 139.59.162.126 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/unarchive.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477923509.7-23354923558460/"" > /dev/null 2>&1 && sleep 0'""'""'' --- a whole lot happens here that just clears the whole terminal almost.. ...2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/ClosureCommand.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._ClosureCommand.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/CommandInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._CommandInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/CreateResponseClassEvent.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._CreateResponseClassEvent.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/DefaultRequestSerializer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._DefaultRequestSerializer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/DefaultResponseParser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/._DefaultResponseParser.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/AliasFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._AliasFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/CompositeFactory.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/._CompositeFactory.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/guzzle/guzzle/src/Guzzle/Service/Command/Factory/ConcreteClassFactory.php \n inflating: 
[... unarchive continues inflating thousands of similar /var/www/Craft-2.6.2950/ and /var/www/__MACOSX/Craft-2.6.2950/ vendor files (guzzle, imagine, lsolesen/pel, pclzip, phpmailer, simplepie) ...] \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/._composer.json \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/idna_convert.class.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/._idna_convert.class.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/LICENCE \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/._LICENCE \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/npdata.ser \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/._npdata.ser \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/ReadMe.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/idn/._ReadMe.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/._idn \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Author.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Author.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/Base.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/._Base.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/DB.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/._DB.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/File.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/._File.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/Memcache.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/._Memcache.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/MySQL.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache/._MySQL.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Cache \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Cache.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Cache.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Caption.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Caption.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Category.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Category.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Content/Type/Sniffer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Content/Type/._Sniffer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Content/._Type \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Content \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Copyright.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Copyright.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Core.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Core.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Credit.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Credit.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Decode/HTML/Entities.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Decode/HTML/._Entities.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Decode/._HTML \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Decode \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Enclosure.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Enclosure.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Exception.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Exception.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/File.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._File.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/gzdecode.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._gzdecode.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/HTTP/Parser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/HTTP/._Parser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._HTTP \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/IRI.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._IRI.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Item.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Item.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Locator.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Locator.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Misc.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Misc.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Net/IPv6.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Net/._IPv6.php \n 
inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Net \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Parse/Date.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Parse/._Date.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Parse \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Parser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Parser.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Rating.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Rating.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Registry.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Registry.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Restriction.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Restriction.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Sanitize.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Sanitize.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/Source.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._Source.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/XML/Declaration/Parser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/XML/Declaration/._Parser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/XML/._Declaration \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie/._XML \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/._SimplePie \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/SimplePie.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/library/._SimplePie.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/._library \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/LICENSE.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/simplepie/._LICENSE.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/simplepie/._simplepie \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/._simplepie \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/CHANGELOG.md \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._CHANGELOG.md \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/composer.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._composer.json \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/ContainerAwareEventDispatcher.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._ContainerAwareEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/TraceableEventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._TraceableEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/TraceableEventDispatcherInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._TraceableEventDispatcherInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/WrappedListener.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Debug/._WrappedListener.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Debug \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/DependencyInjection/RegisterListenersPass.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/DependencyInjection/._RegisterListenersPass.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._DependencyInjection \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Event.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Event.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventDispatcherInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventDispatcherInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/EventSubscriberInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._EventSubscriberInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/GenericEvent.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._GenericEvent.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/ImmutableEventDispatcher.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._ImmutableEventDispatcher.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/LICENSE \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._LICENSE \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/phpunit.xml.dist \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._phpunit.xml.dist \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/README.md \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._README.md \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/AbstractEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._AbstractEventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/ContainerAwareEventDispatcherTest.php 
\n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._ContainerAwareEventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/Debug/TraceableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/Debug/._TraceableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._Debug \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/DependencyInjection/RegisterListenersPassTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/DependencyInjection/._RegisterListenersPassTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._DependencyInjection \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/EventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._EventDispatcherTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/EventTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._EventTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/GenericEventTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._GenericEventTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/ImmutableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/Tests/._ImmutableEventDispatcherTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/event-dispatcher/._Tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/symfony/._event-dispatcher \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/._symfony \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/CHANGELOG \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._CHANGELOG \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/composer.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._composer.json \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/advanced.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._advanced.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/advanced_legacy.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._advanced_legacy.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/api.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._api.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/coding_standards.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._coding_standards.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/deprecated.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._deprecated.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/abs.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._abs.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/batch.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._batch.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/capitalize.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._capitalize.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/convert_encoding.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._convert_encoding.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/date.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._date.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/date_modify.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._date_modify.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/default.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._default.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/escape.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._escape.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/first.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._first.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/format.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._format.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/join.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._join.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/json_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._json_encode.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/keys.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._keys.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/last.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._last.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/length.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._length.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/lower.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._lower.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/merge.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._merge.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/nl2br.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._nl2br.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/number_format.rst \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._number_format.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/raw.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._raw.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/replace.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._replace.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/reverse.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._reverse.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/round.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._round.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/slice.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._slice.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/sort.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._sort.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/split.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._split.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/striptags.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._striptags.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/title.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._title.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/trim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._trim.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/upper.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._upper.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/url_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/filters/._url_encode.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._filters \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/attribute.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._attribute.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/block.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._block.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/constant.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._constant.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/cycle.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._cycle.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/date.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._date.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/dump.rst \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._dump.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/include.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._include.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/max.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._max.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/min.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._min.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/parent.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._parent.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/random.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._random.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/range.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._range.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/source.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._source.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/template_from_string.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/functions/._template_from_string.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._functions \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/installation.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._installation.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/internals.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._internals.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/intro.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._intro.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/recipes.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._recipes.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/autoescape.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._autoescape.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/block.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._block.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/do.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._do.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/embed.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._embed.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/extends.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._extends.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/filter.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._filter.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/flush.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._flush.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/for.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._for.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/from.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._from.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/if.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._if.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/import.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._import.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/include.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._include.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/macro.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._macro.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/sandbox.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._sandbox.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/set.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._set.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/spaceless.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._spaceless.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/use.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._use.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/verbatim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tags/._verbatim.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._tags \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/templates.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._templates.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/constant.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._constant.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/defined.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._defined.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/divisibleby.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._divisibleby.rst \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/empty.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._empty.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/even.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._even.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/index.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._index.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/iterable.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._iterable.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/null.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._null.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/odd.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._odd.rst \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/sameas.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/tests/._sameas.rst \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/doc/._tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._doc \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/config.m4 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._config.m4 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/config.w32 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._config.w32 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/php_twig.h \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._php_twig.h \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/twig.c \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/twig/._twig.c \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/ext/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._ext \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Autoloader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Autoloader.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/BaseNodeVisitor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._BaseNodeVisitor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/Filesystem.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/._Filesystem.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/Null.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Cache/._Null.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Cache \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/CacheInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._CacheInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Compiler.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Compiler.php \n 
inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/CompilerInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._CompilerInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Environment.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Environment.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Loader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Loader.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Runtime.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Runtime.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/Syntax.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error/._Syntax.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Error \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Error.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Error.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExistsLoaderInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExistsLoaderInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExpressionParser.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExpressionParser.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Core.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Core.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Debug.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Debug.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Escaper.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Escaper.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/GlobalsInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._GlobalsInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/InitRuntimeInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._InitRuntimeInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Optimizer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Optimizer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Profiler.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Profiler.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Sandbox.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Sandbox.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/Staging.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._Staging.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/StringLoader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension/._StringLoader.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Extension \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Extension.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Extension.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/ExtensionInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._ExtensionInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FileExtensionEscapingStrategy.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FileExtensionEscapingStrategy.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Method.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Method.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter/._Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Filter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Filter.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Filter.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FilterCallableInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FilterCallableInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FilterInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FilterInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Method.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Method.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function/._Node.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Function \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Function.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Function.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FunctionCallableInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FunctionCallableInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/FunctionInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._FunctionInterface.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Lexer.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Lexer.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/LexerInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._LexerInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Array.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Array.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Chain.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Chain.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/Filesystem.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._Filesystem.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/String.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Loader/._String.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Loader \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/LoaderInterface.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._LoaderInterface.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Markup.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/._Markup.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/AutoEscape.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._AutoEscape.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Block.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Block.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/BlockReference.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._BlockReference.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Body.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Body.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/CheckSecurity.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._CheckSecurity.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Do.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Do.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Embed.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/._Embed.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Array.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._Array.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/AssignName.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/._AssignName.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Add.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Add.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/And.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._And.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseAnd.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseAnd.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseOr.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseOr.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/BitwiseXor.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._BitwiseXor.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Concat.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Concat.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Div.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Div.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/EndsWith.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._EndsWith.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Equal.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Equal.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/FloorDiv.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._FloorDiv.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Greater.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Greater.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/GreaterEqual.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._GreaterEqual.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/In.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._In.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Less.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Less.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/LessEqual.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._LessEqual.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Matches.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Matches.php \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Mod.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Mod.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/Mul.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/lib/Twig/Node/Expression/Binary/._Mul.php \n [... several hundred similar "inflating:" entries omitted — for every file extracted under /var/www/Craft-2.6.2950/ a matching AppleDouble (._*) entry is also extracted under /var/www/__MACOSX/Craft-2.6.2950/ ...] \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/error_line.test
\n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/._error_line.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/multiple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/._multiple.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/nested.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/._nested.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/with_extends.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/embed/._with_extends.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._embed \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/json_encode.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._json_encode.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/multiple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._multiple.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/nested.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._nested.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/with_for_tag.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._with_for_tag.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/with_if_tag.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/filter/._with_if_tag.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._filter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/condition.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._condition.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/context.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._context.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/else.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._else.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/inner_variables.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._inner_variables.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/keys.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._keys.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/keys_and_values.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._keys_and_values.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/loop_context.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._loop_context.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/loop_context_local.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._loop_context_local.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/loop_not_defined.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._loop_not_defined.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/loop_not_defined_cond.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._loop_not_defined_cond.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/nested_else.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._nested_else.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/objects.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._objects.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/objects_countable.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._objects_countable.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/recursive.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._recursive.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/values.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/for/._values.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._for \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/from.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._from.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/if/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/if/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/if/expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/if/._expression.test \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._if \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._expression.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/ignore_missing.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._ignore_missing.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/missing.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._missing.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/missing_nested.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._missing_nested.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/only.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._only.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/template_instance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._template_instance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/templates_as_array.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._templates_as_array.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/with_variables.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/include/._with_variables.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._include \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/block_expr.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._block_expr.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/block_expr2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._block_expr2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/conditional.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._conditional.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/dynamic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._dynamic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/extends_as_array.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._extends_as_array.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/extends_as_array_with_empty_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._extends_as_array_with_empty_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/extends_as_array_with_null_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._extends_as_array_with_null_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/multiple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._multiple.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/multiple_dynamic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._multiple_dynamic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/nested_blocks.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._nested_blocks.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/nested_blocks_parent_only.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._nested_blocks_parent_only.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/nested_inheritance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._nested_inheritance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_change.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_change.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_in_a_block.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_in_a_block.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_isolation.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_isolation.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_nested.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_nested.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_without_extends.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_without_extends.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/parent_without_extends_but_traits.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._parent_without_extends_but_traits.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/template_instance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._template_instance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/use.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/inheritance/._use.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._inheritance \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/endmacro_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._endmacro_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/external.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._external.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/from.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._from.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/from_with_reserved_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._from_with_reserved_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/global.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._global.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/import_with_reserved_nam.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._import_with_reserved_nam.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/reserved_name.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._reserved_name.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/self_import.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._self_import.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/special_chars.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._special_chars.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/super_globals.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/macro/._super_globals.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._macro \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/basic.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._basic.legacy.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/mixed_usage_with_raw.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._mixed_usage_with_raw.legacy.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/whitespace_control.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/raw/._whitespace_control.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._raw \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/not_valid1.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._not_valid1.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/not_valid2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._not_valid2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/sandbox/._simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._sandbox \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/capture-empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._capture-empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/capture.test \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._capture.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/set/._expression.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._set \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/spaceless/simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/spaceless/._simple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._spaceless \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/special_chars.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._special_chars.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/trim_block.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._trim_block.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/aliases.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._aliases.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/deep.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._deep.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/deep_empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._deep_empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/inheritance.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._inheritance.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/inheritance2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._inheritance2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/multiple.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._multiple.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/multiple_aliases.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._multiple_aliases.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block.test \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block2.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block2.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/parent_block3.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/use/._parent_block3.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._use \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/basic.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._basic.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/mixed_usage_with_raw.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._mixed_usage_with_raw.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/whitespace_control.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/verbatim/._whitespace_control.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tags/._verbatim \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/._tags \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/array.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._array.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/constant.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._constant.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/defined.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._defined.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/empty.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._empty.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/even.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._even.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/in.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._in.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/in_with_objects.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._in_with_objects.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/iterable.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._iterable.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/null_coalesce.test \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._null_coalesce.test \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/odd.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/tests/._odd.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Fixtures/._tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Fixtures \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/IntegrationTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._IntegrationTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyFixtures/test.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyFixtures/._test.legacy.test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LegacyFixtures \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LegacyIntegrationTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LegacyIntegrationTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/LexerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._LexerTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/ArrayTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._ArrayTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/ChainTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._ChainTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/FilesystemTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._FilesystemTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_empty_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_empty_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_nonexistent_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_nonexistent_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_null_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_null_parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/array_inheritance_valid_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._array_inheritance_valid_parent.html.twig \n inflating: 
/var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._parent.html.twig \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/spare_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/inheritance/._spare_parent.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._inheritance \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_bis/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_bis/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_bis \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_final/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_final/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_final \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_quater/named_absolute.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_quater/._named_absolute.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_quater \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_ter/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/named_ter/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._named_ter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_bis/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_bis/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_bis \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_final/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_final/._index.html \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_final \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_ter/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/normal_ter/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._normal_ter \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme1/blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme1/._blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/._theme1 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme2/blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/theme2/._blocks.html.twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/themes/._theme2 \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/Fixtures/._themes \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Loader/._Fixtures \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Loader \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NativeExtensionTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._NativeExtensionTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/AutoEscapeTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._AutoEscapeTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/BlockReferenceTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._BlockReferenceTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/BlockTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._BlockTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/DoTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._DoTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ArrayTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ArrayTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/AssignNameTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._AssignNameTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/AddTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._AddTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/AndTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._AndTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/ConcatTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._ConcatTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/DivTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._DivTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/FloorDivTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._FloorDivTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/ModTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._ModTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/MulTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._MulTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/OrTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._OrTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/SubTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Binary/._SubTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._Binary \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/CallTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._CallTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ConditionalTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ConditionalTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ConstantTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ConstantTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/FilterTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._FilterTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/FunctionTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._FunctionTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/GetAttrTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._GetAttrTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/NameTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._NameTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/ParentTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._ParentTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/FilterInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._FilterInclude.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/FunctionInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._FunctionInclude.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/TestInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/PHP53/._TestInclude.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._PHP53 \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/TestTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._TestTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/NegTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._NegTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/NotTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._NotTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/PosTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/Unary/._PosTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/Expression/._Unary \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._Expression \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ForTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ForTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/IfTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._IfTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ImportTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ImportTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/IncludeTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._IncludeTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/MacroTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._MacroTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/ModuleTest.php \n 
inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._ModuleTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/PrintTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._PrintTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SandboxedPrintTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SandboxedPrintTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SandboxTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SandboxTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SetTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SetTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/SpacelessTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._SpacelessTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Node/._TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Node \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NodeVisitor/OptimizerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/NodeVisitor/._OptimizerTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._NodeVisitor \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/ParserTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._ParserTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/AbstractTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._AbstractTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/BlackfireTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._BlackfireTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/HtmlTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._HtmlTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/Dumper/._TextTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/._Dumper \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/ProfileTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/Profiler/._ProfileTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._Profiler \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/TemplateTest.php \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._TemplateTest.php \n inflating: /var/www/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/TokenStreamTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/Tests/._TokenStreamTest.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/Twig/._Tests \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/test/._Twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/twig/._test \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/twig/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/vendor/._twig \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/._vendor \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/BaseWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._BaseWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/FeedWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._FeedWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/GetHelpWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._GetHelpWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/IWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._IWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/NewUsersWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._NewUsersWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/QuickPostWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._QuickPostWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/RecentEntriesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._RecentEntriesWidget.php \n inflating: /var/www/Craft-2.6.2950/craft/app/widgets/UpdatesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/widgets/._UpdatesWidget.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/app/._widgets \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._app \n inflating: /var/www/Craft-2.6.2950/craft/config/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/config/db.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._db.php \n inflating: /var/www/Craft-2.6.2950/craft/config/general.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._general.php \n inflating: /var/www/Craft-2.6.2950/craft/config/redactor/Simple.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/redactor/._Simple.json \n inflating: /var/www/Craft-2.6.2950/craft/config/redactor/Standard.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/redactor/._Standard.json \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._redactor \n inflating: /var/www/Craft-2.6.2950/craft/config/routes.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._routes.php \n inflating: /var/www/Craft-2.6.2950/craft/config/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/config/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._config \n extracting: /var/www/Craft-2.6.2950/craft/plugins/.gitignore \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._.gitignore \n inflating: /var/www/Craft-2.6.2950/craft/plugins/.htaccess \n inflating: 
/var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/plugins/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/plugins/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._plugins \n extracting: /var/www/Craft-2.6.2950/craft/storage/.gitignore \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._.gitignore \n inflating: /var/www/Craft-2.6.2950/craft/storage/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/storage/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/storage/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._storage \n inflating: /var/www/Craft-2.6.2950/craft/templates/.htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._.htaccess \n inflating: /var/www/Craft-2.6.2950/craft/templates/404.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._404.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/_layout.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/.__layout.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._index.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/news/_entry.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/news/.__entry.html \n inflating: /var/www/Craft-2.6.2950/craft/templates/news/index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/news/._index.html \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._news \n inflating: /var/www/Craft-2.6.2950/craft/templates/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/templates/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._templates \n inflating: /var/www/Craft-2.6.2950/craft/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/craft/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._craft \n inflating: /var/www/Craft-2.6.2950/public/htaccess \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._htaccess \n inflating: /var/www/Craft-2.6.2950/public/index.php \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._index.php \n inflating: /var/www/Craft-2.6.2950/public/robots.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._robots.txt \n inflating: /var/www/Craft-2.6.2950/public/web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/public/._web.config \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._public \n inflating: /var/www/Craft-2.6.2950/readme.txt \n inflating: /var/www/__MACOSX/Craft-2.6.2950/._readme.txt \n inflating: /var/www/__MACOSX/._Craft-2.6.2950 \n"", ""rc"": 0 }, ""gid"": 33, ""group"": ""www-data"", ""handler"": ""ZipArchive"", ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""copy"": true, ""creates"": ""/var/www/craft"", ""delimiter"": null, ""dest"": ""/var/www/"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": ""www-data"", ""keep_newer"": false, ""list_files"": false, ""mode"": null, ""original_basename"": ""latest.zip?accept_license=yes"", ""owner"": ""www-data"", ""regexp"": null, ""remote_src"": true, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""https://craftcms.com/latest.zip?accept_license=yes"", ""unsafe_writes"": null, ""validate_certs"": false } }, ""mode"": ""0775"", ""owner"": ""www-data"", 
""size"": 4096, ""src"": ""/tmp/ansible_Zurjor/latest.zip?accept_license=yes"", ""state"": ""directory"", ""uid"": 33 } ``` ",1,ansible unarchive sudden unexpected results issue type bug report component name unarchive ansible version ansible config file users documents workspaces devops infrastructure ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment macos summary i ve been using this script and all since ansible i moved onto weeks ago and this has been working well for setting up new environments suddenly today it unzip s into random folders and unarchive does not extract to var www anymore like it used to it now creates a folder called craft whatever the extension is instead of extracting the content of that zip into this directory it also creates a folder called macosx which contains the original zip and it leaves both of those ugly folders in the directory then my next command completely breaks and the whole rest of the playbook that used to work breaks i m not entirely sure what changed over the weekend steps to reproduce download and unzip craftcms name download and unzip craft unarchive src dest var www remote src yes group www data owner www data creates var www craft validate certs no move public to html name rename public to html shell mv var www public var www html mv var www html htaccess var www html htaccess args creates var www html robots txt hosts test remote user root tasks download and unzip craftcms name download and unzip craft unarchive src dest var www remote src yes group www data owner www data creates var www craft validate certs no move public to html name rename public to html shell mv var www public var www html mv var www html htaccess var www html htaccess args creates var www html robots txt expected results what has been happening and that i expect to happen is it downloads the zip for craftcms it extracts the content of that zip file into var www and sets the owner and group of the files to www data so i expect to see a craft and a public directory inside of var www after the first task runs actual results establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to root ansible tmp ansible tmp stat py ssh exec sftp b vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp stat py sleep establish ssh connection for user root ssh exec ssh 
vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r tt bin sh c usr bin python root ansible tmp ansible tmp stat py rm rf root ansible tmp ansible tmp dev null sleep using module file library python site packages ansible egg ansible modules core files unarchive py establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpinytln to root ansible tmp ansible tmp unarchive py ssh exec sftp b vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp unarchive py sleep establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users admin ansible cp ansible ssh h p r tt bin sh c usr bin python root ansible tmp ansible tmp unarchive py rm rf root ansible tmp ansible tmp dev null sleep a whole lot happens here that just clears the whole terminal almost craft app vendor guzzle guzzle src guzzle service command closurecommand php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command closurecommand php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command commandinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command commandinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command createresponseclassevent php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command createresponseclassevent php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command defaultrequestserializer php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command defaultrequestserializer php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command defaultresponseparser php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command defaultresponseparser php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory aliasfactory php n inflating var www macosx 
craft craft app vendor guzzle guzzle src guzzle service command factory aliasfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory compositefactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory compositefactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory concreteclassfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory concreteclassfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory factoryinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory factoryinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory mapfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory mapfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command factory servicedescriptionfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory servicedescriptionfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command factory n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request abstractrequestvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request abstractrequestvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request bodyvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request bodyvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request headervisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request headervisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request jsonvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request jsonvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request postfieldvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request postfieldvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request postfilevisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request postfilevisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request queryvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request queryvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request requestvisitorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request requestvisitorinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service 
command locationvisitor request responsebodyvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request responsebodyvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request xmlvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request xmlvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor request n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response abstractresponsevisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response abstractresponsevisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response bodyvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response bodyvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response headervisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response headervisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response jsonvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response jsonvisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response reasonphrasevisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response reasonphrasevisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response responsevisitorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response responsevisitorinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response statuscodevisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response statuscodevisitor php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response xmlvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response xmlvisitor php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor response n inflating var www craft craft app vendor guzzle guzzle src guzzle service command locationvisitor visitorflyweight php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor visitorflyweight php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command locationvisitor n inflating var www craft craft app vendor guzzle guzzle src guzzle service command operationcommand php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command operationcommand php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command operationresponseparser php n inflating var www macosx craft craft app vendor 
guzzle guzzle src guzzle service command operationresponseparser php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command requestserializerinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command requestserializerinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command responseclassinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command responseclassinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service command responseparserinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command responseparserinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service command n inflating var www craft craft app vendor guzzle guzzle src guzzle service composer json n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service composer json n inflating var www craft craft app vendor guzzle guzzle src guzzle service configloaderinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service configloaderinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description operation php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description operation php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description operationinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description operationinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description parameter php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description parameter php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description schemaformatter php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description schemaformatter php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description schemavalidator php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description schemavalidator php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description servicedescription php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description servicedescription php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description servicedescriptioninterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description servicedescriptioninterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description servicedescriptionloader php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description servicedescriptionloader php n inflating var www craft craft app vendor guzzle guzzle src guzzle service description validatorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description validatorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service description n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception commandexception php n inflating var www macosx craft craft app vendor 
guzzle guzzle src guzzle service exception commandexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception commandtransferexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception commandtransferexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception descriptionbuilderexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception descriptionbuilderexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception inconsistentclienttransferexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception inconsistentclienttransferexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception responseclassexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception responseclassexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception servicebuilderexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception servicebuilderexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception servicenotfoundexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception servicenotfoundexception php n inflating var www craft craft app vendor guzzle guzzle src guzzle service exception validationexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception validationexception php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service exception n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource abstractresourceiteratorfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource abstractresourceiteratorfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource compositeresourceiteratorfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource compositeresourceiteratorfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource mapresourceiteratorfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource mapresourceiteratorfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource model php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource model php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource resourceiterator php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource resourceiterator php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorapplybatched php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorapplybatched php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorclassfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorclassfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource 
resourceiteratorfactoryinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorfactoryinterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource resourceiteratorinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service resource n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle service n inflating var www craft craft app vendor guzzle guzzle src guzzle stream composer json n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream composer json n inflating var www craft craft app vendor guzzle guzzle src guzzle stream phpstreamrequestfactory php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream phpstreamrequestfactory php n inflating var www craft craft app vendor guzzle guzzle src guzzle stream stream php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream stream php n inflating var www craft craft app vendor guzzle guzzle src guzzle stream streaminterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream streaminterface php n inflating var www craft craft app vendor guzzle guzzle src guzzle stream streamrequestfactoryinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream streamrequestfactoryinterface php n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle stream n inflating var www macosx craft craft app vendor guzzle guzzle src guzzle n inflating var www macosx craft craft app vendor guzzle guzzle src n inflating var www craft craft app vendor guzzle guzzle upgrading md n inflating var www macosx craft craft app vendor guzzle guzzle upgrading md n inflating var www macosx craft craft app vendor guzzle guzzle n inflating var www macosx craft craft app vendor guzzle n inflating var www craft craft app vendor imagine imagine changelog md n inflating var www macosx craft craft app vendor imagine imagine changelog md n inflating var www craft craft app vendor imagine imagine composer json n inflating var www macosx craft craft app vendor imagine imagine composer json n inflating var www craft craft app vendor imagine imagine lib imagine draw drawerinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine draw drawerinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine draw n inflating var www craft craft app vendor imagine imagine lib imagine effects effectsinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine effects effectsinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine effects n inflating var www craft craft app vendor imagine imagine lib imagine exception exception php n inflating var www macosx craft craft app vendor imagine imagine lib imagine exception exception php n inflating var www craft craft app vendor imagine imagine lib imagine exception invalidargumentexception php n inflating var www macosx craft craft app vendor imagine imagine lib imagine exception invalidargumentexception php n inflating var www craft craft app vendor imagine imagine lib imagine exception notsupportedexception php n inflating var www macosx craft craft app vendor imagine imagine lib 
imagine exception notsupportedexception php n inflating var www craft craft app vendor imagine imagine lib imagine exception outofboundsexception php n inflating var www macosx craft craft app vendor imagine imagine lib imagine exception outofboundsexception php n inflating var www craft craft app vendor imagine imagine lib imagine exception runtimeexception php n inflating var www macosx craft craft app vendor imagine imagine lib imagine exception runtimeexception php n inflating var www macosx craft craft app vendor imagine imagine lib imagine exception n inflating var www craft craft app vendor imagine imagine lib imagine filter advanced border php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced border php n inflating var www craft craft app vendor imagine imagine lib imagine filter advanced canvas php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced canvas php n inflating var www craft craft app vendor imagine imagine lib imagine filter advanced grayscale php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced grayscale php n inflating var www craft craft app vendor imagine imagine lib imagine filter advanced onpixelbased php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced onpixelbased php n inflating var www craft craft app vendor imagine imagine lib imagine filter advanced relativeresize php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced relativeresize php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter advanced n inflating var www craft craft app vendor imagine imagine lib imagine filter basic applymask php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic applymask php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic autorotate php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic autorotate php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic copy php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic copy php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic crop php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic crop php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic fill php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic fill php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic fliphorizontally php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic fliphorizontally php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic flipvertically php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic flipvertically php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic paste php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic paste php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic resize php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic resize php n inflating var www craft craft app 
vendor imagine imagine lib imagine filter basic rotate php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic rotate php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic save php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic save php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic show php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic show php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic strip php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic strip php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic thumbnail php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic thumbnail php n inflating var www craft craft app vendor imagine imagine lib imagine filter basic weboptimization php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic weboptimization php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter basic n inflating var www craft craft app vendor imagine imagine lib imagine filter filterinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter filterinterface php n inflating var www craft craft app vendor imagine imagine lib imagine filter imagineaware php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter imagineaware php n inflating var www craft craft app vendor imagine imagine lib imagine filter transformation php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter transformation php n inflating var www macosx craft craft app vendor imagine imagine lib imagine filter n inflating var www craft craft app vendor imagine imagine lib imagine gd drawer php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd drawer php n inflating var www craft craft app vendor imagine imagine lib imagine gd effects php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd effects php n inflating var www craft craft app vendor imagine imagine lib imagine gd font php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd font php n inflating var www craft craft app vendor imagine imagine lib imagine gd image php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd image php n inflating var www craft craft app vendor imagine imagine lib imagine gd imagine php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd imagine php n inflating var www craft craft app vendor imagine imagine lib imagine gd layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gd n inflating var www craft craft app vendor imagine imagine lib imagine gmagick drawer php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick drawer php n inflating var www craft craft app vendor imagine imagine lib imagine gmagick effects php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick effects php n inflating var www craft craft app vendor imagine imagine lib imagine gmagick font php n inflating 
var www macosx craft craft app vendor imagine imagine lib imagine gmagick font php n inflating var www craft craft app vendor imagine imagine lib imagine gmagick image php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick image php n inflating var www craft craft app vendor imagine imagine lib imagine gmagick imagine php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick imagine php n inflating var www craft craft app vendor imagine imagine lib imagine gmagick layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine gmagick n inflating var www craft craft app vendor imagine imagine lib imagine image abstractfont php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image abstractfont php n inflating var www craft craft app vendor imagine imagine lib imagine image abstractimage php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image abstractimage php n inflating var www craft craft app vendor imagine imagine lib imagine image abstractimagine php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image abstractimagine php n inflating var www craft craft app vendor imagine imagine lib imagine image abstractlayers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image abstractlayers php n inflating var www craft craft app vendor imagine imagine lib imagine image box php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image box php n inflating var www craft craft app vendor imagine imagine lib imagine image boxinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image boxinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image fill fillinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill fillinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image fill gradient horizontal php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill gradient horizontal php n inflating var www craft craft app vendor imagine imagine lib imagine image fill gradient linear php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill gradient linear php n inflating var www craft craft app vendor imagine imagine lib imagine image fill gradient vertical php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill gradient vertical php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill gradient n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fill n inflating var www craft craft app vendor imagine imagine lib imagine image fontinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image fontinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image histogram bucket php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image histogram bucket php n inflating var www craft craft app vendor imagine imagine lib imagine image histogram range php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image histogram range php n inflating var www 
macosx craft craft app vendor imagine imagine lib imagine image histogram n inflating var www craft craft app vendor imagine imagine lib imagine image imageinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image imageinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image imagineinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image imagineinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image layersinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image layersinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image manipulatorinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image manipulatorinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image metadata abstractmetadatareader php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata abstractmetadatareader php n inflating var www craft craft app vendor imagine imagine lib imagine image metadata defaultmetadatareader php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata defaultmetadatareader php n inflating var www craft craft app vendor imagine imagine lib imagine image metadata exifmetadatareader php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata exifmetadatareader php n inflating var www craft craft app vendor imagine imagine lib imagine image metadata metadatabag php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata metadatabag php n inflating var www craft craft app vendor imagine imagine lib imagine image metadata metadatareaderinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata metadatareaderinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image metadata n inflating var www craft craft app vendor imagine imagine lib imagine image palette cmyk php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette cmyk php n inflating var www craft craft app vendor imagine imagine lib imagine image palette color cmyk php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette color cmyk php n inflating var www craft craft app vendor imagine imagine lib imagine image palette color colorinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette color colorinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image palette color gray php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette color gray php n inflating var www craft craft app vendor imagine imagine lib imagine image palette color rgb php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette color rgb php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette color n inflating var www craft craft app vendor imagine imagine lib imagine image palette colorparser php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette colorparser php n inflating var www craft craft app vendor imagine imagine lib imagine image palette 
grayscale php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette grayscale php n inflating var www craft craft app vendor imagine imagine lib imagine image palette paletteinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette paletteinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image palette rgb php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette rgb php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image palette n inflating var www craft craft app vendor imagine imagine lib imagine image point center php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image point center php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image point n inflating var www craft craft app vendor imagine imagine lib imagine image point php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image point php n inflating var www craft craft app vendor imagine imagine lib imagine image pointinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image pointinterface php n inflating var www craft craft app vendor imagine imagine lib imagine image profile php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image profile php n inflating var www craft craft app vendor imagine imagine lib imagine image profileinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image profileinterface php n inflating var www macosx craft craft app vendor imagine imagine lib imagine image n inflating var www craft craft app vendor imagine imagine lib imagine imagick drawer php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick drawer php n inflating var www craft craft app vendor imagine imagine lib imagine imagick effects php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick effects php n inflating var www craft craft app vendor imagine imagine lib imagine imagick font php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick font php n inflating var www craft craft app vendor imagine imagine lib imagine imagick image php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick image php n inflating var www craft craft app vendor imagine imagine lib imagine imagick imagick php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick imagick php n inflating var www craft craft app vendor imagine imagine lib imagine imagick imagine php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick imagine php n inflating var www craft craft app vendor imagine imagine lib imagine imagick layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick layers php n inflating var www macosx craft craft app vendor imagine imagine lib imagine imagick n inflating var www craft craft app vendor imagine imagine lib imagine resources adobe cmyk uswebuncoated icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources adobe cmyk uswebuncoated icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources adobe cmyk n inflating var www macosx craft craft app vendor imagine imagine lib 
imagine resources adobe n inflating var www craft craft app vendor imagine imagine lib imagine resources color org srgb black scaled icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources color org srgb black scaled icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources color org n inflating var www craft craft app vendor imagine imagine lib imagine resources colormanagement org isocoated bas icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources colormanagement org isocoated bas icc n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources colormanagement org n inflating var www macosx craft craft app vendor imagine imagine lib imagine resources n inflating var www macosx craft craft app vendor imagine imagine lib imagine n inflating var www macosx craft craft app vendor imagine imagine lib n inflating var www craft craft app vendor imagine imagine license n inflating var www macosx craft craft app vendor imagine imagine license n inflating var www macosx craft craft app vendor imagine imagine n inflating var www macosx craft craft app vendor imagine n inflating var www craft craft app vendor lsolesen pel authors n inflating var www macosx craft craft app vendor lsolesen pel authors n inflating var www craft craft app vendor lsolesen pel autoload php n inflating var www macosx craft craft app vendor lsolesen pel autoload php n inflating var www craft craft app vendor lsolesen pel composer json n inflating var www macosx craft craft app vendor lsolesen pel composer json n inflating var www craft craft app vendor lsolesen pel copying n inflating var www macosx craft craft app vendor lsolesen pel copying n inflating var www craft craft app vendor lsolesen pel install n inflating var www macosx craft craft app vendor lsolesen pel install n inflating var www craft craft app vendor lsolesen pel news n inflating var www macosx craft craft app vendor lsolesen pel news n inflating var www craft craft app vendor lsolesen pel po da po n inflating var www macosx craft craft app vendor lsolesen pel po da po n inflating var www craft craft app vendor lsolesen pel po de po n inflating var www macosx craft craft app vendor lsolesen pel po de po n inflating var www craft craft app vendor lsolesen pel po es po n inflating var www macosx craft craft app vendor lsolesen pel po es po n inflating var www craft craft app vendor lsolesen pel po fr po n inflating var www macosx craft craft app vendor lsolesen pel po fr po n inflating var www craft craft app vendor lsolesen pel po ja po n inflating var www macosx craft craft app vendor lsolesen pel po ja po n inflating var www craft craft app vendor lsolesen pel po nl po n inflating var www macosx craft craft app vendor lsolesen pel po nl po n inflating var www craft craft app vendor lsolesen pel po pel pot n inflating var www macosx craft craft app vendor lsolesen pel po pel pot n inflating var www craft craft app vendor lsolesen pel po pl po n inflating var www macosx craft craft app vendor lsolesen pel po pl po n inflating var www macosx craft craft app vendor lsolesen pel po n inflating var www craft craft app vendor lsolesen pel readme n inflating var www macosx craft craft app vendor lsolesen pel readme n inflating var www craft craft app vendor lsolesen pel readme markdown n inflating var www macosx craft craft app vendor lsolesen pel readme markdown n inflating var www craft craft app vendor lsolesen pel src pel php n 
inflating var www macosx craft craft app vendor lsolesen pel src pel php n inflating var www craft craft app vendor lsolesen pel src pelconvert php n inflating var www macosx craft craft app vendor lsolesen pel src pelconvert php n inflating var www craft craft app vendor lsolesen pel src peldatawindow php n inflating var www macosx craft craft app vendor lsolesen pel src peldatawindow php n inflating var www craft craft app vendor lsolesen pel src peldatawindowoffsetexception php n inflating var www macosx craft craft app vendor lsolesen pel src peldatawindowoffsetexception php n inflating var www craft craft app vendor lsolesen pel src peldatawindowwindowexception php n inflating var www macosx craft craft app vendor lsolesen pel src peldatawindowwindowexception php n inflating var www craft craft app vendor lsolesen pel src pelentry php n inflating var www macosx craft craft app vendor lsolesen pel src pelentry php n inflating var www craft craft app vendor lsolesen pel src pelentryascii php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryascii php n inflating var www craft craft app vendor lsolesen pel src pelentrybyte php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrybyte php n inflating var www craft craft app vendor lsolesen pel src pelentrycopyright php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrycopyright php n inflating var www craft craft app vendor lsolesen pel src pelentryexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryexception php n inflating var www craft craft app vendor lsolesen pel src pelentrylong php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrylong php n inflating var www craft craft app vendor lsolesen pel src pelentrynumber php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrynumber php n inflating var www craft craft app vendor lsolesen pel src pelentryrational php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryrational php n inflating var www craft craft app vendor lsolesen pel src pelentrysbyte php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrysbyte php n inflating var www craft craft app vendor lsolesen pel src pelentryshort php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryshort php n inflating var www craft craft app vendor lsolesen pel src pelentryslong php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryslong php n inflating var www craft craft app vendor lsolesen pel src pelentrysrational php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrysrational php n inflating var www craft craft app vendor lsolesen pel src pelentrysshort php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrysshort php n inflating var www craft craft app vendor lsolesen pel src pelentrytime php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrytime php n inflating var www craft craft app vendor lsolesen pel src pelentryundefined php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryundefined php n inflating var www craft craft app vendor lsolesen pel src pelentryusercomment php n inflating var www macosx craft craft app vendor lsolesen pel src pelentryusercomment php n inflating var www craft craft app vendor lsolesen pel src pelentryversion php n inflating var www macosx craft craft app vendor 
lsolesen pel src pelentryversion php n inflating var www craft craft app vendor lsolesen pel src pelentrywindowsstring php n inflating var www macosx craft craft app vendor lsolesen pel src pelentrywindowsstring php n inflating var www craft craft app vendor lsolesen pel src pelexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelexception php n inflating var www craft craft app vendor lsolesen pel src pelexif php n inflating var www macosx craft craft app vendor lsolesen pel src pelexif php n inflating var www craft craft app vendor lsolesen pel src pelformat php n inflating var www macosx craft craft app vendor lsolesen pel src pelformat php n inflating var www craft craft app vendor lsolesen pel src pelifd php n inflating var www macosx craft craft app vendor lsolesen pel src pelifd php n inflating var www craft craft app vendor lsolesen pel src pelifdexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelifdexception php n inflating var www craft craft app vendor lsolesen pel src pelinvalidargumentexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelinvalidargumentexception php n inflating var www craft craft app vendor lsolesen pel src pelinvaliddataexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelinvaliddataexception php n inflating var www craft craft app vendor lsolesen pel src peljpeg php n inflating var www macosx craft craft app vendor lsolesen pel src peljpeg php n inflating var www craft craft app vendor lsolesen pel src peljpegcomment php n inflating var www macosx craft craft app vendor lsolesen pel src peljpegcomment php n inflating var www craft craft app vendor lsolesen pel src peljpegcontent php n inflating var www macosx craft craft app vendor lsolesen pel src peljpegcontent php n inflating var www craft craft app vendor lsolesen pel src peljpeginvalidmarkerexception php n inflating var www macosx craft craft app vendor lsolesen pel src peljpeginvalidmarkerexception php n inflating var www craft craft app vendor lsolesen pel src peljpegmarker php n inflating var www macosx craft craft app vendor lsolesen pel src peljpegmarker php n inflating var www craft craft app vendor lsolesen pel src peloverflowexception php n inflating var www macosx craft craft app vendor lsolesen pel src peloverflowexception php n inflating var www craft craft app vendor lsolesen pel src peltag php n inflating var www macosx craft craft app vendor lsolesen pel src peltag php n inflating var www craft craft app vendor lsolesen pel src peltiff php n inflating var www macosx craft craft app vendor lsolesen pel src peltiff php n inflating var www craft craft app vendor lsolesen pel src pelunexpectedformatexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelunexpectedformatexception php n inflating var www craft craft app vendor lsolesen pel src pelwrongcomponentcountexception php n inflating var www macosx craft craft app vendor lsolesen pel src pelwrongcomponentcountexception php n inflating var www macosx craft craft app vendor lsolesen pel src n inflating var www macosx craft craft app vendor lsolesen pel n inflating var www macosx craft craft app vendor lsolesen n inflating var www craft craft app vendor pclzip pclzip composer json n inflating var www macosx craft craft app vendor pclzip pclzip composer json n inflating var www craft craft app vendor pclzip pclzip gnu lgpl txt n inflating var www macosx craft craft app vendor pclzip pclzip 
[verbose unzip log: repeated "\n inflating: ..." lines listing every file of a Craft CMS archive being extracted under /var/www/craft/craft/app/vendor/ (pclzip, phpmailer and its language files, simplepie, symfony event-dispatcher, and twig doc/ext/lib), with each path duplicated under a /var/www/macosx/... prefix, likely the __MACOSX metadata directory of a Mac-created zip; newlines were flattened to a literal "n" by the dataset's punctuation-stripping text processing, and the listing continues beyond this section]
php n inflating var www craft craft app vendor twig twig lib twig tokenparser sandbox php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser sandbox php n inflating var www craft craft app vendor twig twig lib twig tokenparser set php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser set php n inflating var www craft craft app vendor twig twig lib twig tokenparser spaceless php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser spaceless php n inflating var www craft craft app vendor twig twig lib twig tokenparser use php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser use php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser n inflating var www craft craft app vendor twig twig lib twig tokenparser php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparser php n inflating var www craft craft app vendor twig twig lib twig tokenparserbroker php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparserbroker php n inflating var www craft craft app vendor twig twig lib twig tokenparserbrokerinterface php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparserbrokerinterface php n inflating var www craft craft app vendor twig twig lib twig tokenparserinterface php n inflating var www macosx craft craft app vendor twig twig lib twig tokenparserinterface php n inflating var www craft craft app vendor twig twig lib twig tokenstream php n inflating var www macosx craft craft app vendor twig twig lib twig tokenstream php n inflating var www craft craft app vendor twig twig lib twig util deprecationcollector php n inflating var www macosx craft craft app vendor twig twig lib twig util deprecationcollector php n inflating var www craft craft app vendor twig twig lib twig util templatediriterator php n inflating var www macosx craft craft app vendor twig twig lib twig util templatediriterator php n inflating var www macosx craft craft app vendor twig twig lib twig util n inflating var www macosx craft craft app vendor twig twig lib twig n inflating var www macosx craft craft app vendor twig twig lib n inflating var www craft craft app vendor twig twig license n inflating var www macosx craft craft app vendor twig twig license n inflating var www craft craft app vendor twig twig phpunit xml dist n inflating var www macosx craft craft app vendor twig twig phpunit xml dist n inflating var www craft craft app vendor twig twig readme rst n inflating var www macosx craft craft app vendor twig twig readme rst n inflating var www craft craft app vendor twig twig test bootstrap php n inflating var www macosx craft craft app vendor twig twig test bootstrap php n inflating var www craft craft app vendor twig twig test twig tests autoloadertest php n inflating var www macosx craft craft app vendor twig twig test twig tests autoloadertest php n inflating var www craft craft app vendor twig twig test twig tests cache filesystemtest php n inflating var www macosx craft craft app vendor twig twig test twig tests cache filesystemtest php n inflating var www macosx craft craft app vendor twig twig test twig tests cache n inflating var www craft craft app vendor twig twig test twig tests compilertest php n inflating var www macosx craft craft app vendor twig twig test twig tests compilertest php n inflating var www craft craft app vendor twig twig test twig tests environmenttest php n inflating 
var www macosx craft craft app vendor twig twig test twig tests environmenttest php n inflating var www craft craft app vendor twig twig test twig tests errortest php n inflating var www macosx craft craft app vendor twig twig test twig tests errortest php n inflating var www craft craft app vendor twig twig test twig tests escapingtest php n inflating var www macosx craft craft app vendor twig twig test twig tests escapingtest php n inflating var www craft craft app vendor twig twig test twig tests expressionparsertest php n inflating var www macosx craft craft app vendor twig twig test twig tests expressionparsertest php n inflating var www craft craft app vendor twig twig test twig tests extension coretest php n inflating var www macosx craft craft app vendor twig twig test twig tests extension coretest php n inflating var www craft craft app vendor twig twig test twig tests extension sandboxtest php n inflating var www macosx craft craft app vendor twig twig test twig tests extension sandboxtest php n inflating var www macosx craft craft app vendor twig twig test twig tests extension n inflating var www craft craft app vendor twig twig test twig tests filecachingtest php n inflating var www macosx craft craft app vendor twig twig test twig tests filecachingtest php n inflating var www craft craft app vendor twig twig test twig tests fileextensionescapingstrategytest php n inflating var www macosx craft craft app vendor twig twig test twig tests fileextensionescapingstrategytest php n inflating var www craft craft app vendor twig twig test twig tests filesystemhelper php n inflating var www macosx craft craft app vendor twig twig test twig tests filesystemhelper php n inflating var www craft craft app vendor twig twig test twig tests fixtures autoescape filename test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures autoescape filename test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures autoescape n inflating var www craft craft app vendor twig twig test twig tests fixtures errors base html n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures errors base html n inflating var www craft craft app vendor twig twig test twig tests fixtures errors index html n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures errors index html n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures errors n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions multiline array with undefined variable test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions multiline array with undefined variable test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions multiline array with undefined variable again test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions multiline array with undefined variable again test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions multiline function with undefined variable test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions multiline function with undefined variable test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions multiline function with unknown argument test n inflating var www macosx craft craft app vendor twig twig test twig tests 
fixtures exceptions multiline function with unknown argument test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions multiline tag with undefined variable test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions multiline tag with undefined variable test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions syntax error in reused template test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions syntax error in reused template test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions unclosed tag test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions unclosed tag test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions undefined parent test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions undefined parent test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions undefined template in child template test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions undefined template in child template test n inflating var www craft craft app vendor twig twig test twig tests fixtures exceptions undefined trait test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions undefined trait test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures exceptions n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions array test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions array test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions array call test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions array call test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions binary test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions binary test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions bitwise test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions bitwise test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions comparison test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions comparison test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions divisibleby test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions divisibleby test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions dotdot test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions dotdot test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions ends with test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions ends with test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions grouping test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures 
expressions grouping test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions literals test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions literals test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions magic call test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions magic call test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions matches test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions matches test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions method call test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions method call test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions negative numbers test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions negative numbers test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions operators as variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions operators as variables test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions postfix test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions postfix test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions sameas test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions sameas test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions starts with test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions starts with test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions strings test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions strings test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions ternary operator test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions ternary operator test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions ternary operator noelse test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions ternary operator noelse test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions ternary operator nothen test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions ternary operator nothen test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions two word operators as variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions two word operators as variables test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions unary test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions unary test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions unary macro arguments test n inflating var www 
macosx craft craft app vendor twig twig test twig tests fixtures expressions unary macro arguments test n inflating var www craft craft app vendor twig twig test twig tests fixtures expressions unary precedence test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions unary precedence test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures expressions n inflating var www craft craft app vendor twig twig test twig tests fixtures filters abs test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters abs test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch float test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch float test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch with empty fill test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch with empty fill test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch with exact elements test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch with exact elements test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch with fill test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch with fill test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch with keys test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch with keys test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters batch with zero elements test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters batch with zero elements test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters convert encoding test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters convert encoding test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date default format test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date default format test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date default format interval test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date default format interval test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date immutable test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date immutable test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date interval test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date interval test n inflating var www 
craft craft app vendor twig twig test twig tests fixtures filters date modify test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date modify test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters date namedargs test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters date namedargs test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters default test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters default test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters dynamic filter test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters dynamic filter test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters escape test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters escape test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters escape html attr test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters escape html attr test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters escape non supported charset test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters escape non supported charset test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters first test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters first test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters force escape test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters force escape test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters format test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters format test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters join test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters join test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters json encode test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters json encode test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters last test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters last test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters length test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters length test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters length test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters length test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters merge test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters merge test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters test n inflating var www 
craft craft app vendor twig twig test twig tests fixtures filters number format test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters number format test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters number format default test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters number format default test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters replace test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters replace test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters replace invalid arg test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters replace invalid arg test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters reverse test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters reverse test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters round test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters round test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters slice test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters slice test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters sort test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters sort test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters special chars test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters special chars test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters split test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters split test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters split test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters split test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters trim test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters trim test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters urlencode test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters urlencode test n inflating var www craft craft app vendor twig twig test twig tests fixtures filters urlencode deprecated test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters urlencode deprecated test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures filters n inflating var www craft craft app vendor twig twig test twig tests fixtures functions attribute test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions attribute test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions block test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions block test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions constant test n inflating 
var www macosx craft craft app vendor twig twig test twig tests fixtures functions constant test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions cycle test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions cycle test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions date test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions date test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions date namedargs test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions date namedargs test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions dump test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions dump test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions dump array test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions dump array test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions dynamic function test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions dynamic function test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include assignment test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include assignment test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include autoescaping test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include autoescaping test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include expression test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include expression test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include ignore missing test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include ignore missing test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include missing test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include missing test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include missing nested test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include missing nested test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include sandbox test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include sandbox test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include sandbox disabling test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include sandbox disabling test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions 
include sandbox disabling ignore missing test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include sandbox disabling ignore missing test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include template instance test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include template instance test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include templates as array test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include templates as array test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include with context test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include with context test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions include with variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include with variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions include n inflating var www craft craft app vendor twig twig test twig tests fixtures functions max test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions max test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions min test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions min test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions range test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions range test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions recursive block with inheritance test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions recursive block with inheritance test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions source test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions source test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions special chars test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions special chars test n inflating var www craft craft app vendor twig twig test twig tests fixtures functions template from string test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions template from string test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures functions n inflating var www craft craft app vendor twig twig test twig tests fixtures macros default values test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros default values test n inflating var www craft craft app vendor twig twig test twig tests fixtures macros nested calls test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros nested calls test n inflating var www craft craft app vendor twig twig test twig tests fixtures macros reserved variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros reserved variables test n inflating var www 
craft craft app vendor twig twig test twig tests fixtures macros simple test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros simple test n inflating var www craft craft app vendor twig twig test twig tests fixtures macros varargs test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros varargs test n inflating var www craft craft app vendor twig twig test twig tests fixtures macros varargs argument test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros varargs argument test n inflating var www craft craft app vendor twig twig test twig tests fixtures macros with filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros with filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures macros n inflating var www craft craft app vendor twig twig test twig tests fixtures regression combined debug info test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression combined debug info test n inflating var www craft craft app vendor twig twig test twig tests fixtures regression empty token test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression empty token test n inflating var www craft craft app vendor twig twig test twig tests fixtures regression issue test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression issue test n inflating var www craft craft app vendor twig twig test twig tests fixtures regression multi word tests test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression multi word tests test n inflating var www craft craft app vendor twig twig test twig tests fixtures regression simple xml element test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression simple xml element test n inflating var www craft craft app vendor twig twig test twig tests fixtures regression strings like numbers test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression strings like numbers test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures regression n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape blocks test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape blocks test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape double escaping test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape double escaping test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape functions test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape functions test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape literal test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape literal test n inflating var www craft craft app vendor twig twig test twig tests 
fixtures tags autoescape nested test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape nested test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape objects test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape objects test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape raw test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape raw test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape strategy legacy test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape strategy legacy test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape strategy test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape strategy test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape type test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape type test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape with filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape with filters test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape with filters arguments test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape with filters arguments test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape with pre escape filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape with pre escape filters test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags autoescape with preserves safety filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape with preserves safety filters test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags autoescape n inflating var www craft craft app vendor twig twig test twig tests fixtures tags block basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags block basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags block block unique name test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags block block unique name test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags block special chars test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags block special chars test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags block n inflating var www craft craft app vendor twig twig test twig tests fixtures tags embed basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags embed error line test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed error line test n inflating 
var www craft craft app vendor twig twig test twig tests fixtures tags embed multiple test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed multiple test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags embed nested test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed nested test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags embed with extends test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed with extends test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags embed n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter json encode test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter json encode test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter multiple test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter multiple test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter nested test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter nested test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter with for tag test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter with for tag test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags filter with if tag test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter with if tag test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags filter n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for condition test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for condition test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for context test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for context test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for else test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for else test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for inner variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for inner variables test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for keys test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for keys test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for keys and values test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for keys and values test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for loop context test n inflating var www macosx craft craft app vendor twig twig 
test twig tests fixtures tags for loop context test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for loop context local test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for loop context local test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for loop not defined test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for loop not defined test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for loop not defined cond test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for loop not defined cond test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for nested else test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for nested else test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for objects test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for objects test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for objects countable test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for objects countable test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for recursive test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for recursive test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags for values test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for values test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags for n inflating var www craft craft app vendor twig twig test twig tests fixtures tags from test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags from test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags if basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags if basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags if expression test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags if expression test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags if n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include expression test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include expression test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include ignore missing test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include ignore missing test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include missing test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include missing test n inflating var www craft craft app vendor twig twig test twig tests fixtures 
tags include missing nested test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include missing nested test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include only test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include only test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include template instance test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include template instance test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include templates as array test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include templates as array test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags include with variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include with variables test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags include n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance basic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance basic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance block expr test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance block expr test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance block test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance block test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance conditional test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance conditional test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance dynamic test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance dynamic test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance empty test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance empty test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array with empty name test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array with empty name test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array with null name test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance extends as array with null name test n inflating var www craft craft app vendor twig twig test twig tests fixtures tags inheritance multiple test n inflating var www macosx craft craft app vendor twig twig test twig tests fixtures tags inheritance multiple test n inflating var www craft craft app vendor 
[verbose unzip output: hundreds of repetitive 'inflating /var/www/craft/...' lines covering the bundled Twig vendor test fixtures (tags, tests, loader, node and profiler suites) and the Craft app, config, plugins, storage, templates and public directories]
rc gid group www data handler ziparchive invocation module args backup null content null copy true creates var www craft delimiter null dest var www directory mode null exclude extra opts follow false force null group www data keep newer false list files false mode null original basename latest zip accept license yes owner www data regexp null remote src true selevel null serole null setype null seuser null src unsafe writes null validate certs false mode owner www data size src tmp ansible zurjor latest zip accept license yes state directory uid ,1 961,4704706600.0,IssuesEvent,2016-10-13 12:28:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bad documentation example in gce module,affects_2.2 cloud docs_report gce waiting_on_maintainer,"This is the example I'm following, but I end up with different results. You can see below most of the playbook works out fine except for the part when Ansible comes in to do its magic. For some reason the host cannot be found. 
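(Editor's note, not part of the original report: a hedged sketch of the usual fix for the 'skipping: no hosts matched' symptom visible in the transcript below. The second play targets `hosts: launched`, but nothing in the first play ever registers the new instance in a group of that name; the gce module's documented pattern is to follow the provisioning task with `add_host`. The group name `launched` and the use of the public address are assumptions read off the playbook shown further down.)

```
# Sketch only: put each freshly created GCE instance into the in-memory
# 'launched' group so that the later 'Ansible Provisioning' play can match it.
- name: Add new instances to the launched group
  add_host: hostname={{ item.public_ip }} groupname=launched
  with_items: gce.instance_data
```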
┌─┤[james@xps13-nocentre-net:~/workspace/github/ansible][3.18.8-201.fc21.x86_64 10:51:10]├────── ── ─ └[feature/elk] $ ansible-playbook playbooks/plays/gce/elk.yml -i inventory/hosts_production ``` PLAY [Google Cloud Procurement] *********************************************** TASK: [gce] ******************************************************************* ok: [localhost -> 127.0.0.1] TASK: [set_fact public_ip={{gce.instance_data[0].public_ip}}] ***************** ok: [localhost] TASK: [set_fact private_ip={{gce.instance_data[0].private_ip}}] *************** ok: [localhost] TASK: [set_fact ansible_ssh_host={{public_ip}}] ******************************* ok: [localhost] TASK: [debug msg=""{{gce}}""] *************************************************** ok: [localhost] => { ""msg"": ""{u'instance_data': [{u'status': u'RUNNING', u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'network': u'default'}], u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', 'invocation': {'module_name': u'gce', 'module_args': ''}}"" } TASK: [Wait for SSH to come up] *********************************************** ok: [localhost -> 127.0.0.1] => (item={u'status': u'RUNNING', u'network': u'default', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'name': u'elk-stage-uscentral1a-kernelfire-com'}) TASK: [gce_pd] **************************************************************** ok: [localhost -> 127.0.0.1] TASK: [set_fact gce_pd_name={{gce_pd.name}}] ********************************** ok: [localhost] TASK: [debug msg=""{{gce_pd}}""] ************************************************ ok: [localhost] => { ""msg"": ""{u'size_gb': 100, u'name': u'elk-data-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', u'attached_to_instance': u'elk-stage-uscentral1a-kernelfire-com', 'invocation': {'module_name': u'gce_pd', 'module_args': ''}, u'disk_type': u'pd-ssd', u'attached_mode': u'READ_WRITE'}"" } TASK: [debug msg=""{{gce_pd_name}}""] ******************************************* ok: [localhost] => { ""msg"": ""elk-data-stage-uscentral1a-kernelfire-com"" } PLAY [Ansible Provisioning] *************************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** localhost : ok=10 changed=0 unreachable=0 failed=0 ``` ============= PLAYBOOK ============== ``` - name: Google Cloud Procurement hosts: localhost connection: local gather_facts: false tasks: - local_action: module: gce image: centos-6 machine_type: n1-standard-4 name: elk-stage-uscentral1a-kernelfire-com persistent_boot_disk: true project_id: cf-stage zone: us-central1-a register: gce - set_fact: public_ip={{gce.instance_data[0].public_ip}} - set_fact: private_ip={{gce.instance_data[0].private_ip}} - set_fact: ansible_ssh_host={{public_ip}} - debug: msg=""{{gce}}"" - name: Wait for SSH to come up local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started with_items: 
""{{gce.instance_data}}"" - local_action: module: gce_pd disk_type: pd-ssd instance_name: ""{{gce.instance_data[0].name}}"" mode: READ_WRITE name: elk-data-stage-uscentral1a-kernelfire-com project_id: cf-stage size_gb: 100 zone: us-central1-a register: gce_pd - set_fact: gce_pd_name={{gce_pd.name}} - debug: msg=""{{gce_pd}}"" - debug: msg=""{{gce_pd_name}}"" - name: Ansible Provisioning hosts: launched sudo: true roles: - { role: elasticsearch, tags: [ 'extended', 'elasticsearch' ] } - { role: redis, tags: [ 'extended', 'redis' ] } - { role: logstash, tags: [ 'extended', 'logstash' ] } - { role: kibana, tags: [ 'extended', 'kibana' ] } ``` Any ideas?",True,"Bad documentation example in gce module - This is the example I'm following, but I end up with different results. You can see below most of the playbook works out fine except for the part when Ansible comes in to do it's magic. For some reason the host cannot be found. ┌─┤[james@xps13-nocentre-net:~/workspace/github/ansible][3.18.8-201.fc21.x86_64 10:51:10]├────── ── ─ └[feature/elk] $ ansible-playbook playbooks/plays/gce/elk.yml -i inventory/hosts_production ``` PLAY [Google Cloud Procurement] *********************************************** TASK: [gce] ******************************************************************* ok: [localhost -> 127.0.0.1] TASK: [set_fact public_ip={{gce.instance_data[0].public_ip}}] ***************** ok: [localhost] TASK: [set_fact private_ip={{gce.instance_data[0].private_ip}}] *************** ok: [localhost] TASK: [set_fact ansible_ssh_host={{public_ip}}] ******************************* ok: [localhost] TASK: [debug msg=""{{gce}}""] *************************************************** ok: [localhost] => { ""msg"": ""{u'instance_data': [{u'status': u'RUNNING', u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'network': u'default'}], u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', 'invocation': {'module_name': u'gce', 'module_args': ''}}"" } TASK: [Wait for SSH to come up] *********************************************** ok: [localhost -> 127.0.0.1] => (item={u'status': u'RUNNING', u'network': u'default', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'name': u'elk-stage-uscentral1a-kernelfire-com'}) TASK: [gce_pd] **************************************************************** ok: [localhost -> 127.0.0.1] TASK: [set_fact gce_pd_name={{gce_pd.name}}] ********************************** ok: [localhost] TASK: [debug msg=""{{gce_pd}}""] ************************************************ ok: [localhost] => { ""msg"": ""{u'size_gb': 100, u'name': u'elk-data-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', u'attached_to_instance': u'elk-stage-uscentral1a-kernelfire-com', 'invocation': {'module_name': u'gce_pd', 'module_args': ''}, u'disk_type': u'pd-ssd', u'attached_mode': u'READ_WRITE'}"" } TASK: [debug msg=""{{gce_pd_name}}""] ******************************************* ok: [localhost] => { ""msg"": 
""elk-data-stage-uscentral1a-kernelfire-com"" } PLAY [Ansible Provisioning] *************************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** localhost : ok=10 changed=0 unreachable=0 failed=0 ``` ============= PLAYBOOK ============== ``` - name: Google Cloud Procurement hosts: localhost connection: local gather_facts: false tasks: - local_action: module: gce image: centos-6 machine_type: n1-standard-4 name: elk-stage-uscentral1a-kernelfire-com persistent_boot_disk: true project_id: cf-stage zone: us-central1-a register: gce - set_fact: public_ip={{gce.instance_data[0].public_ip}} - set_fact: private_ip={{gce.instance_data[0].private_ip}} - set_fact: ansible_ssh_host={{public_ip}} - debug: msg=""{{gce}}"" - name: Wait for SSH to come up local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started with_items: ""{{gce.instance_data}}"" - local_action: module: gce_pd disk_type: pd-ssd instance_name: ""{{gce.instance_data[0].name}}"" mode: READ_WRITE name: elk-data-stage-uscentral1a-kernelfire-com project_id: cf-stage size_gb: 100 zone: us-central1-a register: gce_pd - set_fact: gce_pd_name={{gce_pd.name}} - debug: msg=""{{gce_pd}}"" - debug: msg=""{{gce_pd_name}}"" - name: Ansible Provisioning hosts: launched sudo: true roles: - { role: elasticsearch, tags: [ 'extended', 'elasticsearch' ] } - { role: redis, tags: [ 'extended', 'redis' ] } - { role: logstash, tags: [ 'extended', 'logstash' ] } - { role: kibana, tags: [ 'extended', 'kibana' ] } ``` Any ideas?",1,bad documentation example in gce module this is the example i m following but i end up with different results you can see below most of the playbook works out fine except for the part when ansible comes in to do it s magic for some reason the host cannot be found ┌─┤ ├────── ── ─ └ ansible playbook playbooks plays gce elk yml i inventory hosts production play task ok task public ip ok task private ip ok task ok task ok msg u instance data u image none u disks u public ip u u private ip u u machine type u standard u metadata u network u default u name u elk stage kernelfire com u zone u us a u changed false u state u present invocation module name u gce module args task ok item u status u running u network u default u zone u us a u tags u image none u disks u public ip u u private ip u u machine type u standard u metadata u name u elk stage kernelfire com task ok task ok task ok msg u size gb u name u elk data stage kernelfire com u zone u us a u changed false u state u present u attached to instance u elk stage kernelfire com invocation module name u gce pd module args u disk type u pd ssd u attached mode u read write task ok msg elk data stage kernelfire com play skipping no hosts matched play recap localhost ok changed unreachable failed playbook name google cloud procurement hosts localhost connection local gather facts false tasks local action module gce image centos machine type standard name elk stage kernelfire com persistent boot disk true project id cf stage zone us a register gce set fact public ip gce instance data public ip set fact private ip gce instance data private ip set fact ansible ssh host public ip debug msg gce name wait for ssh to come up local action wait for host item public ip port delay timeout state started with items gce instance data local action module gce pd disk type pd ssd instance name gce instance data name mode read write name elk data stage kernelfire com project id cf stage size 
gb zone us a register gce pd set fact gce pd name gce pd name debug msg gce pd debug msg gce pd name name ansible provisioning hosts launched sudo true roles role elasticsearch tags role redis tags role logstash tags role kibana tags any ideas ,1 1200,5133075979.0,IssuesEvent,2017-01-11 01:41:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Deleting elasticache cluster on redis throws TypeError,affects_2.0 aws bug_report cloud P2 waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Component Name: elasticache module ##### Ansible Version: ansible 2.0.0.2 ##### Ansible Configuration: `any_errors_fatal = True` ##### Environment: redis on aws elasticache ##### Summary: Errors on deleting cluster. ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 2727, in main() File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 562, in main module.exit_json(**facts_result) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 2099, in exit_json kwargs = remove_values(kwargs, self.no_log_values) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 980, in remove_values return [remove_values(elem, no_log_strings) for elem in value] File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 991, in remove_values raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) TypeError: Value of unknown type: , fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""elasticache""}, ""parsed"": false} ``` ##### Steps To Reproduce: Create cluster with elasticache module. Attempt to delete cluster with `state: absent`. If it helps, the cluster name has a hyphen. e.g. 
`abc-redis` ``` - name: launch elasticache instance elasticache: cache_engine_version: 2.8.24 cache_port: 6379 cache_security_groups: [] cache_subnet_group: --- engine: redis hard_modify: false name: abc-redis node_type: cache.t2.micro num_nodes: 1 region: ---- security_group_ids: ---- state: present wait: true - name: terminate cluster elasticache: name: abc-redis region: --- state: absent ``` ##### Expected Results: Expect cluster to be deleted without errors. ##### Actual Results: As far as I can tell the cluster is gone from the AWS console, however there shouldn't be any errors thrown. ",True,"Deleting elasticache cluster on redis throws TypeError - ##### Issue Type: - Bug Report ##### Component Name: elasticache module ##### Ansible Version: ansible 2.0.0.2 ##### Ansible Configuration: `any_errors_fatal = True` ##### Environment: redis on aws elasticache ##### Summary: Errors on deleting cluster. ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 2727, in main() File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 562, in main module.exit_json(**facts_result) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 2099, in exit_json kwargs = remove_values(kwargs, self.no_log_values) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in remove_values return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 982, in return dict((k, remove_values(v, no_log_strings)) for k, v in value.items()) File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 980, in remove_values return [remove_values(elem, no_log_strings) for elem in value] File ""/Users/james/.ansible/tmp/ansible-tmp-1455652399.45-51638225824520/elasticache"", line 991, in remove_values raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) TypeError: Value of unknown type: , fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""elasticache""}, ""parsed"": false} ``` ##### Steps To Reproduce: Create cluster with elasticache module. Attempt to delete cluster with `state: absent`. If it helps, the cluster name has a hyphen. e.g. 
`abc-redis` ``` - name: launch elasticache instance elasticache: cache_engine_version: 2.8.24 cache_port: 6379 cache_security_groups: [] cache_subnet_group: --- engine: redis hard_modify: false name: abc-redis node_type: cache.t2.micro num_nodes: 1 region: ---- security_group_ids: ---- state: present wait: true - name: terminate cluster elasticache: name: abc-redis region: --- state: absent ``` ##### Expected Results: Expect cluster to be deleted without errors. ##### Actual Results: As far as I can tell the cluster is gone from the AWS console, however there shouldn't be any errors thrown. ",1,deleting elasticache cluster on redis throws typeerror issue type bug report component name elasticache module ansible version ansible ansible configuration any errors fatal true environment redis on aws elasticache summary errors on deleting cluster an exception occurred during task execution the full traceback is traceback most recent call last file users james ansible tmp ansible tmp elasticache line in main file users james ansible tmp ansible tmp elasticache line in main module exit json facts result file users james ansible tmp ansible tmp elasticache line in exit json kwargs remove values kwargs self no log values file users james ansible tmp ansible tmp elasticache line in remove values return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in remove values return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in remove values return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in return dict k remove values v no log strings for k v in value items file users james ansible tmp ansible tmp elasticache line in remove values return file users james ansible tmp ansible tmp elasticache line in remove values raise typeerror value of unknown type s s type value value typeerror value of unknown type fatal failed changed false failed true invocation module name elasticache parsed false steps to reproduce create cluster with elasticache module attempt to delete cluster with state absent if it helps the cluster name has a hyphen e g abc redis name launch elasticache instance elasticache cache engine version cache port cache security groups cache subnet group engine redis hard modify false name abc redis node type cache micro num nodes region security group ids state present wait true name terminate cluster elasticache name abc redis region state absent expected results expect cluster to be deleted without errors actual results as far as i can tell the cluster is gone from the aws console however there shouldn t be any errors thrown ,1 1149,5008015815.0,IssuesEvent,2016-12-12 18:18:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Validating configuration files that include files with relative paths,affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - template - copy ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When you validate configuration files that include additional files with relative paths, the 
validation fails bacause those included files are not found. ##### STEPS TO REPRODUCE For example, I have an nginx configuration file called `/etc/nginx/includes.d/php.conf` that includes a file called `fastcgi_params` which is inside the `/etc/nginx` path and nginx always executes from this path so everything works fine. However, when I try to validate the configuration, I get this: ``` TASK [nginx : Add nginx configuration] ***************************************** fatal: [test]: FAILED! => {""changed"": true, ""exit_status"": 1, ""failed"": true, ""msg"": ""failed to validate"", ""stderr"": ""nginx: [emerg] open() \""/root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/fastcgi_params\"" failed (2: No such file or directory) in /etc/nginx/includes.d/php.conf:19\nnginx: configuration file /root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/source test failed\n"", ""stdout"": """", ""stdout_lines"": []} ``` It would be great if you could provide an additional option for validation, like `validate_cwd` so the validation process would change to this directory before the test. In this case, setting `validate_cwd` to `/etc/nginx` should make the validation pass. Of course, the workaround is to always use absolute paths, but I think this might be a handy option to have.",True,"Validating configuration files that include files with relative paths - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - template - copy ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When you validate configuration files that include additional files with relative paths, the validation fails bacause those included files are not found. ##### STEPS TO REPRODUCE For example, I have an nginx configuration file called `/etc/nginx/includes.d/php.conf` that includes a file called `fastcgi_params` which is inside the `/etc/nginx` path and nginx always executes from this path so everything works fine. However, when I try to validate the configuration, I get this: ``` TASK [nginx : Add nginx configuration] ***************************************** fatal: [test]: FAILED! => {""changed"": true, ""exit_status"": 1, ""failed"": true, ""msg"": ""failed to validate"", ""stderr"": ""nginx: [emerg] open() \""/root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/fastcgi_params\"" failed (2: No such file or directory) in /etc/nginx/includes.d/php.conf:19\nnginx: configuration file /root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/source test failed\n"", ""stdout"": """", ""stdout_lines"": []} ``` It would be great if you could provide an additional option for validation, like `validate_cwd` so the validation process would change to this directory before the test. In this case, setting `validate_cwd` to `/etc/nginx` should make the validation pass. 
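(Editor's note, added for illustration rather than taken from the original request: a minimal sketch of how the proposed option could look on a template task. `validate_cwd` does not exist in the copy or template modules as of this report, and the `php.conf.j2` source name and the exact nginx flags are assumptions.)

```
# Sketch of the requested behaviour: change into /etc/nginx before running
# the validate command so that relative 'include fastcgi_params;' lines resolve.
- name: install nginx include fragment
  template:
    src: php.conf.j2          # hypothetical template source
    dest: /etc/nginx/includes.d/php.conf
    validate: nginx -t -c %s  # today this runs against Ansible's temp copy
    validate_cwd: /etc/nginx  # the option proposed in this feature request
```

Until something like this exists, the absolute-path include mentioned next is the practical workaround.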
Of course, the workaround is to always use absolute paths, but I think this might be a handy option to have.",1,validating configuration files that include files with relative paths issue type feature idea component name template copy ansible version ansible os environment n a summary when you validate configuration files that include additional files with relative paths the validation fails bacause those included files are not found steps to reproduce for example i have an nginx configuration file called etc nginx includes d php conf that includes a file called fastcgi params which is inside the etc nginx path and nginx always executes from this path so everything works fine however when i try to validate the configuration i get this task fatal failed changed true exit status failed true msg failed to validate stderr nginx open root ansible tmp ansible tmp fastcgi params failed no such file or directory in etc nginx includes d php conf nnginx configuration file root ansible tmp ansible tmp source test failed n stdout stdout lines it would be great if you could provide an additional option for validation like validate cwd so the validation process would change to this directory before the test in this case setting validate cwd to etc nginx should make the validation pass of course the workaround is to always use absolute paths but i think this might be a handy option to have ,1 1599,6572380949.0,IssuesEvent,2017-09-11 01:52:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible 2.0.0.2 - aws ec2_group - rules_egress not working ,affects_2.0 aws bug_report cloud waiting_on_maintainer,"## Issue Type Bug Report ## Component Name ec2_group ## Ansible Version 2.0.0.2 ## Environment Ansible 2.0.0.2 Ubuntu 14.03 AWS ## Summary I am trying to apply outbound firewall rules to AWS using the rules_egress: However, when I log into my AWS account and look at the winrdp security group, all I see are the inbound rules and no outbound rules. ### dev-environment.yml ``` # security groups to be created security_groups: - name: winrdp desc: the security group for the winrdp server rules: - proto: tcp from_port: 5986 to_port: 5986 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 3389 to_port: 3389 cidr_ip: 0.0.0.0/0 rules_egress: - proto: all cidr_ip: 0.0.0.0/0 ``` ### playbook ``` --- - name: Provision ec2 instances based on the environment hosts: localhost connection: local gather_facts: False vars_files: - vars/dev-environment.yml - vars/aws-creds.yml tasks: - name: Create required security groups ec2_group: name: ""{{ item.name }}"" description: ""{{ item.desc }}"" rules: ""{{ item.rules }}"" region: ""{{ ec2_region }}"" ec2_access_key: ""{{ ec2_access_key }}"" ec2_secret_key: ""{{ ec2_secret_key }}"" with_items: security_groups ``` ![screenshot from 2016-02-16 20 31 24](https://cloud.githubusercontent.com/assets/6406166/13097103/63c440a8-d4ec-11e5-8769-2ecc6f1c651e.png) ",True,"ansible 2.0.0.2 - aws ec2_group - rules_egress not working - ## Issue Type Bug Report ## Component Name ec2_group ## Ansible Version 2.0.0.2 ## Environment Ansible 2.0.0.2 Ubuntu 14.03 AWS ## Summary I am trying to apply outbound firewall rules to AWS using the rules_egress: However, when I log into my AWS account and look at the winrdp security group, all I see are the inbound rules and no outbound rules. 
### dev-environment.yml ``` # security groups to be created security_groups: - name: winrdp desc: the security group for the winrdp server rules: - proto: tcp from_port: 5986 to_port: 5986 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 3389 to_port: 3389 cidr_ip: 0.0.0.0/0 rules_egress: - proto: all cidr_ip: 0.0.0.0/0 ``` ### playbook ``` --- - name: Provision ec2 instances based on the environment hosts: localhost connection: local gather_facts: False vars_files: - vars/dev-environment.yml - vars/aws-creds.yml tasks: - name: Create required security groups ec2_group: name: ""{{ item.name }}"" description: ""{{ item.desc }}"" rules: ""{{ item.rules }}"" region: ""{{ ec2_region }}"" ec2_access_key: ""{{ ec2_access_key }}"" ec2_secret_key: ""{{ ec2_secret_key }}"" with_items: security_groups ``` ![screenshot from 2016-02-16 20 31 24](https://cloud.githubusercontent.com/assets/6406166/13097103/63c440a8-d4ec-11e5-8769-2ecc6f1c651e.png) ",1,ansible aws group rules egress not working issue type bug report component name group ansible version environment ansible ubuntu aws summary i am trying to apply outbound firewall rules to aws using the rules egress however when i log into my aws account and look at the winrdp security group all i see are the inbound rules and no outbound rules dev environment yml security groups to be created security groups name winrdp desc the security group for the winrdp server rules proto tcp from port to port cidr ip proto tcp from port to port cidr ip rules egress proto all cidr ip playbook name provision instances based on the environment hosts localhost connection local gather facts false vars files vars dev environment yml vars aws creds yml tasks name create required security groups group name item name description item desc rules item rules region region access key access key secret key secret key with items security groups ,1 1764,6575021315.0,IssuesEvent,2017-09-11 14:48:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,interface with vlan is returned with as INTERFACE.VLAN instead of INTERFACE_VLAN,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module `setup` in a playbook as `gather_facts: yes` ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ansible running from centos 2.6.32-573.22.1.el6.x86_64 managing CheckPoint Gaia (RedHat based) 2.6.18-92cpx86_64 ##### SUMMARY An interface with a vlan is returned with as INTERFACE.VLAN instead of INTERFACE_VLAN so it cannot be used as variable. ##### STEPS TO REPRODUCE There is an vlan interface in linux ``` # ip a ... eth2-01.100@eth2-01: mtu 1500 qdisc noqueue link/ether 00:1c:7f:65:91:d4 brd ff:ff:ff:ff:ff:ff inet 10.35.192.12/28 brd 10.35.192.15 scope global eth2-01.100 ... 
``` ansible -m setup returns interface with vlan like this ``` ""ansible_eth2_01.100"": { ""active"": true, ""device"": ""eth2-01.100"", ""ipv4"": { ""address"": ""10.35.192.12"", ""broadcast"": ""10.35.192.15"", ""netmask"": ""255.255.255.240"", ""network"": ""10.35.192.0"" }, ""macaddress"": ""00:1c:7f:65:91:d4"", ""mtu"": 1500, ""promisc"": false, ""type"": ""ether"" }, ``` I want to use this fact in playbook as variable ``` {{ ansible_eth2_01.100.ipv4.address }} ``` but I got an error: `Error, in the future this will be a fatal error.: 'dict' object has no element 100.` ##### EXPECTED RESULTS Following this issue https://github.com/ansible/ansible/issues/6879 I believe that this should be fixed by changing `ansible_eth2_01.100` to `ansible_eth2_01_100` ##### ACTUAL RESULTS ``` Error, in the future this will be a fatal error.: 'dict' object has no element 100 ``` ",True,"interface with vlan is returned with as INTERFACE.VLAN instead of INTERFACE_VLAN - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module `setup` in a playbook as `gather_facts: yes` ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ansible running from centos 2.6.32-573.22.1.el6.x86_64 managing CheckPoint Gaia (RedHat based) 2.6.18-92cpx86_64 ##### SUMMARY An interface with a vlan is returned with as INTERFACE.VLAN instead of INTERFACE_VLAN so it cannot be used as variable. ##### STEPS TO REPRODUCE There is an vlan interface in linux ``` # ip a ... eth2-01.100@eth2-01: mtu 1500 qdisc noqueue link/ether 00:1c:7f:65:91:d4 brd ff:ff:ff:ff:ff:ff inet 10.35.192.12/28 brd 10.35.192.15 scope global eth2-01.100 ... ``` ansible -m setup returns interface with vlan like this ``` ""ansible_eth2_01.100"": { ""active"": true, ""device"": ""eth2-01.100"", ""ipv4"": { ""address"": ""10.35.192.12"", ""broadcast"": ""10.35.192.15"", ""netmask"": ""255.255.255.240"", ""network"": ""10.35.192.0"" }, ""macaddress"": ""00:1c:7f:65:91:d4"", ""mtu"": 1500, ""promisc"": false, ""type"": ""ether"" }, ``` I want to use this fact in playbook as variable ``` {{ ansible_eth2_01.100.ipv4.address }} ``` but I got an error: `Error, in the future this will be a fatal error.: 'dict' object has no element 100.` ##### EXPECTED RESULTS Following this issue https://github.com/ansible/ansible/issues/6879 I believe that this should be fixed by changing `ansible_eth2_01.100` to `ansible_eth2_01_100` ##### ACTUAL RESULTS ``` Error, in the future this will be a fatal error.: 'dict' object has no element 100 ``` ",1,interface with vlan is returned with as interface vlan instead of interface vlan issue type bug report component name module setup in a playbook as gather facts yes ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible running from centos managing checkpoint gaia redhat based summary an interface with a vlan is returned with as interface vlan instead of interface vlan so it cannot be used as variable steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used there is an vlan interface in linux ip a mtu qdisc noqueue link ether brd 
ff ff ff ff ff ff inet brd scope global ansible m setup returns interface with vlan like this ansible active true device address broadcast netmask network macaddress mtu promisc false type ether i want to use this fact in playbook as variable ansible address but i got an error error in the future this will be a fatal error dict object has no element expected results following this issue i believe that this should be fixed by changing ansible to ansible actual results error in the future this will be a fatal error dict object has no element ,1 1744,6574917778.0,IssuesEvent,2017-09-11 14:29:23,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_metric_alarm does not recognize provided credentials,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` C8H10N4O2:ansible mkramer$ ansible --version ansible 2.1.0.0 config file = /Users/mkramer/github/infrastructure/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OSX/AWS ##### SUMMARY Running the ec2_metric_alarm in a playbook results in the following error: ``` ""msg"": ""No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"" ``` It fails to recognize boto profiles, or exported environment vars. With some googling, I found other issues similar to this that occurred in other modules and one of the work arounds suggested was to use: ``` aws_access_key: ""{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"" aws_secret_key: ""{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"" ``` explicitly in the play. That works. ##### STEPS TO REPRODUCE - Create a simple play based on the ec2_metric_alarm module example in the docs. - Run with .aws/credentials profile or exported aws credentials. ``` tasks: - name: Gather facts action: ec2_facts - name: debug debug: var=ansible_ec2_instance_id - name: Create test alarm ec2_metric_alarm: #aws_access_key: ""{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"" #aws_secret_key: ""{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"" state: present region: '{{ default_region }}' name: ""cpu-low"" metric: ""CPUUtilization"" namespace: ""AWS/EC2"" statistic: Average comparison: ""<="" threshold: 5.0 period: 300 evaluation_periods: 3 unit: ""Percent"" description: ""This will alarm when a bamboo slave's cpu usage average is lower than 5% for 15 minutes "" dimensions: {'InstanceId':'{{ ansible_ec2_instance_id }}'} ``` ##### EXPECTED RESULTS I expect the play to run when using AWS_PROFILE= or at least after exporting aws credentials to environment vars. ##### ACTUAL RESULTS It dun borked: `""msg"": ""No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials""` ",True,"ec2_metric_alarm does not recognize provided credentials - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` C8H10N4O2:ansible mkramer$ ansible --version ansible 2.1.0.0 config file = /Users/mkramer/github/infrastructure/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OSX/AWS ##### SUMMARY Running the ec2_metric_alarm in a playbook results in the following error: ``` ""msg"": ""No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"" ``` It fails to recognize boto profiles, or exported environment vars. 
With some googling, I found other issues similar to this that occurred in other modules and one of the work arounds suggested was to use: ``` aws_access_key: ""{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"" aws_secret_key: ""{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"" ``` explicitly in the play. That works. ##### STEPS TO REPRODUCE - Create a simple play based on the ec2_metric_alarm module example in the docs. - Run with .aws/credentials profile or exported aws credentials. ``` tasks: - name: Gather facts action: ec2_facts - name: debug debug: var=ansible_ec2_instance_id - name: Create test alarm ec2_metric_alarm: #aws_access_key: ""{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"" #aws_secret_key: ""{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"" state: present region: '{{ default_region }}' name: ""cpu-low"" metric: ""CPUUtilization"" namespace: ""AWS/EC2"" statistic: Average comparison: ""<="" threshold: 5.0 period: 300 evaluation_periods: 3 unit: ""Percent"" description: ""This will alarm when a bamboo slave's cpu usage average is lower than 5% for 15 minutes "" dimensions: {'InstanceId':'{{ ansible_ec2_instance_id }}'} ``` ##### EXPECTED RESULTS I expect the play to run when using AWS_PROFILE= or at least after exporting aws credentials to environment vars. ##### ACTUAL RESULTS It dun borked: `""msg"": ""No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials""` ",1, metric alarm does not recognize provided credentials issue type bug report component name metric alarm ansible version ansible mkramer ansible version ansible config file users mkramer github infrastructure ansible ansible cfg configured module search path default w o overrides os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific osx aws summary running the metric alarm in a playbook results in the following error msg no handler was ready to authenticate handlers were checked check your credentials it fails to recognize boto profiles or exported environment vars with some googling i found other issues similar to this that occurred in other modules and one of the work arounds suggested was to use aws access key lookup env aws access key id aws secret key lookup env aws secret access key explicitly in the play that works steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a simple play based on the metric alarm module example in the docs run with aws credentials profile or exported aws credentials tasks name gather facts action facts name debug debug var ansible instance id name create test alarm metric alarm aws access key lookup env aws access key id aws secret key lookup env aws secret access key state present region default region name cpu low metric cpuutilization namespace aws statistic average comparison threshold period evaluation periods unit percent description this will alarm when a bamboo slave s cpu usage average is lower than for minutes dimensions instanceid ansible instance id expected results i expect the play to run when using aws profile or at least after exporting aws credentials to environment vars actual results it dun borked msg no handler was ready to authenticate handlers were checked check your credentials ,1 1203,5135313505.0,IssuesEvent,2017-01-11 11:54:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,find module doesn't recognize symlinks 
correctly,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: Bug Report ##### Ansible Version: 2.0.2 ##### Ansible Configuration: default configuration ##### Environment: debian jessie ##### Summary: When using the ""find"" module, symlinks are falsely identified as regular files. ##### Steps To Reproduce: How to reproduce: ``` mkdir /tmp/test touch /tmp/file ln -s /tmp/file /tmp/test/symlink ``` then run a playbook like this: ``` --- - hosts: localhost tasks: - find: paths=""/tmp/test"" register: find_result - debug: var=find_result ``` ##### Expected Results: ``` ""files"": [ { ""islnk"": true, ""isreg"": false, ""path"": ""/tmp/test/symlink"", } ] ``` ##### Actual Results: In the output, you will see ``` ""files"": [ { ""islnk"": false, ""isreg"": true, ""path"": ""/tmp/test/symlink"", } ] ``` When running `- find: paths=""/tmp/test"" file_type=file` to explicitly only match regular files, the result is the same (the symlink is in the results list). ##### Solution The reason for this is that [os.stat](https://docs.python.org/2/library/os.html#os.stat) is used to get the file info: https://github.com/ansible/ansible-modules-core/blob/devel/files/find.py#L316, and `os.stat` follows symlinks. Instead, [os.lstat](https://docs.python.org/2/library/os.html#os.lstat) should be used, which does not follow symlinks. It would also be possible to add an additional option `follow_symlinks=yes/no` to determine behaviour for this, but IMO, not following symlinks should be the default. ",True,"find module doesn't recognize symlinks correctly - ##### Issue Type: Bug Report ##### Ansible Version: 2.0.2 ##### Ansible Configuration: default configuration ##### Environment: debian jessie ##### Summary: When using the ""find"" module, symlinks are falsely identified as regular files. ##### Steps To Reproduce: How to reproduce: ``` mkdir /tmp/test touch /tmp/file ln -s /tmp/file /tmp/test/symlink ``` then run a playbook like this: ``` --- - hosts: localhost tasks: - find: paths=""/tmp/test"" register: find_result - debug: var=find_result ``` ##### Expected Results: ``` ""files"": [ { ""islnk"": true, ""isreg"": false, ""path"": ""/tmp/test/symlink"", } ] ``` ##### Actual Results: In the output, you will see ``` ""files"": [ { ""islnk"": false, ""isreg"": true, ""path"": ""/tmp/test/symlink"", } ] ``` When running `- find: paths=""/tmp/test"" file_type=file` to explicitly only match regular files, the result is the same (the symlink is in the results list). ##### Solution The reason for this is that [os.stat](https://docs.python.org/2/library/os.html#os.stat) is used to get the file info: https://github.com/ansible/ansible-modules-core/blob/devel/files/find.py#L316, and `os.stat` follows symlinks. Instead, [os.lstat](https://docs.python.org/2/library/os.html#os.lstat) should be used, which does not follow symlinks. It would also be possible to add an additional option `follow_symlinks=yes/no` to determine behaviour for this, but IMO, not following symlinks should be the default. 
",1,find module doesn t recognize symlinks correctly issue type bug report ansible version ansible configuration default configuration environment debian jessie summary when using the find module symlinks are falsely identified as regular files steps to reproduce how to reproduce mkdir tmp test touch tmp file ln s tmp file tmp test symlink then run a playbook like this hosts localhost tasks find paths tmp test register find result debug var find result expected results files islnk true isreg false path tmp test symlink actual results in the output you will see files islnk false isreg true path tmp test symlink when running find paths tmp test file type file to explicitly only match regular files the result is the same the symlink is in the results list solution the reason for this is that is used to get the file info and os stat follows symlinks instead should be used which does not follow symlinks it would also be possible to add an additional option follow symlinks yes no to determine behaviour for this but imo not following symlinks should be the default ,1 1401,6025460174.0,IssuesEvent,2017-06-08 08:45:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Feature: Make Subversion module better behaved with local modified files,affects_2.0 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Stock ##### OS / ENVIRONMENT Ansible server: ubuntu server 14.04LTS target(s): ubuntu server 14.04LTS ##### SUMMARY This is a feature request. Currently when doing an SVN update using the subversion toolset, an `update` will leave local moded files alone, and update the rest. With Ansible, using the subversion module, the documentation says that if the repository exists, then the play will update the files, but if the there are local modified files, then it will **fail**. Or we can set the `force` option and it will discard the local modified files. This behavior is not consistent with the SVN tool set. Update has a specific meaning and behavior. The force option essentially is a `revert` and `update`. I propose that the module behave more like the SVN tool set, and let an update be an update, and let a check out be a checkout and a revert be a revert. I do however, find the usefulness of the ability to know that there are local mods, so maybe an expanded option set would be best. ##### STEPS TO REPRODUCE Instead of `force`, use `revert`. `subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob revert=true` This option would be the same as `svn revert -R ` being issued and then an `update`. `subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob` With no options specified, it would be like a `svn update ` if the repo exists, or a `svn co ` if the repo doesn't exist. If the repo does exist and there are local mods, then the update could issue warning or info text to the Ansible user. ##### EXPECTED RESULTS **instead of** `fatal: [www.local]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""ERROR: modified files exist in the repository.""}` **Maybe:** `info: [www.local]: INFO! => {""changed"": true, ""failed"": false, ""msg"": ""WARNING: modified files exist in the repository.""}` and of course the files would have been updated. 
",True,"Feature: Make Subversion module better behaved with local modified files - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Stock ##### OS / ENVIRONMENT Ansible server: ubuntu server 14.04LTS target(s): ubuntu server 14.04LTS ##### SUMMARY This is a feature request. Currently when doing an SVN update using the subversion toolset, an `update` will leave local moded files alone, and update the rest. With Ansible, using the subversion module, the documentation says that if the repository exists, then the play will update the files, but if the there are local modified files, then it will **fail**. Or we can set the `force` option and it will discard the local modified files. This behavior is not consistent with the SVN tool set. Update has a specific meaning and behavior. The force option essentially is a `revert` and `update`. I propose that the module behave more like the SVN tool set, and let an update be an update, and let a check out be a checkout and a revert be a revert. I do however, find the usefulness of the ability to know that there are local mods, so maybe an expanded option set would be best. ##### STEPS TO REPRODUCE Instead of `force`, use `revert`. `subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob revert=true` This option would be the same as `svn revert -R ` being issued and then an `update`. `subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob` With no options specified, it would be like a `svn update ` if the repo exists, or a `svn co ` if the repo doesn't exist. If the repo does exist and there are local mods, then the update could issue warning or info text to the Ansible user. ##### EXPECTED RESULTS **instead of** `fatal: [www.local]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""ERROR: modified files exist in the repository.""}` **Maybe:** `info: [www.local]: INFO! => {""changed"": true, ""failed"": false, ""msg"": ""WARNING: modified files exist in the repository.""}` and of course the files would have been updated. 
",1,feature make subversion module better behaved with local modified files issue type feature idea component name subversion ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration stock os environment ansible server ubuntu server target s ubuntu server summary this is a feature request currently when doing an svn update using the subversion toolset an update will leave local moded files alone and update the rest with ansible using the subversion module the documentation says that if the repository exists then the play will update the files but if the there are local modified files then it will fail or we can set the force option and it will discard the local modified files this behavior is not consistent with the svn tool set update has a specific meaning and behavior the force option essentially is a revert and update i propose that the module behave more like the svn tool set and let an update be an update and let a check out be a checkout and a revert be a revert i do however find the usefulness of the ability to know that there are local mods so maybe an expanded option set would be best steps to reproduce instead of force use revert subversion repo dest var site roots mob revert true this option would be the same as svn revert r being issued and then an update subversion repo dest var site roots mob with no options specified it would be like a svn update if the repo exists or a svn co if the repo doesn t exist if the repo does exist and there are local mods then the update could issue warning or info text to the ansible user expected results instead of fatal failed changed false failed true msg error modified files exist in the repository maybe info info changed true failed false msg warning modified files exist in the repository and of course the files would have been updated ,1 1091,4953034651.0,IssuesEvent,2016-12-01 13:59:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container never starts containers,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /usr/src/playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ``` Linux 37059fd9679f 3.13.0-91-generic #138-Ubuntu SMP Fri Jun 24 17:00:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY Containers are created with the wrong name, outside network and never start ##### STEPS TO REPRODUCE Run a playbook that does the following: ``` - name: ensure docker network exists docker_network: appends: True connected: - existing_container name: my_network state: present - name: ensure new container is running docker_container: image: mongo:3.2 name: new-container networks: - name: my_network state: started ``` ##### EXPECTED RESULTS A container named `new-container` running, and in the `my_networ` network ##### ACTUAL RESULTS Container with generic name is `Created` outside network, not running [`ansible` `-vvvvv` output](https://gist.github.com/gvilarino/5b4c71773dee722005ca3777230f1cda) [`dockerd -D` output](https://gist.github.com/gvilarino/b8acb2d76dfdc02f133da29766718614) `docker ps -a` yields: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c2d470cdcc51 mongo:3.2 ""/entrypoint.sh mongo"" 21 minutes ago Created stupefied_pasteur ``` According to the docker daemon logs, the daemon never gets the 
`/start` instruction, just the `/create` ",True,"docker_container never starts containers - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /usr/src/playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ``` Linux 37059fd9679f 3.13.0-91-generic #138-Ubuntu SMP Fri Jun 24 17:00:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY Containers are created with the wrong name, outside network and never start ##### STEPS TO REPRODUCE Run a playbook that does the following: ``` - name: ensure docker network exists docker_network: appends: True connected: - existing_container name: my_network state: present - name: ensure new container is running docker_container: image: mongo:3.2 name: new-container networks: - name: my_network state: started ``` ##### EXPECTED RESULTS A container named `new-container` running, and in the `my_networ` network ##### ACTUAL RESULTS Container with generic name is `Created` outside network, not running [`ansible` `-vvvvv` output](https://gist.github.com/gvilarino/5b4c71773dee722005ca3777230f1cda) [`dockerd -D` output](https://gist.github.com/gvilarino/b8acb2d76dfdc02f133da29766718614) `docker ps -a` yields: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c2d470cdcc51 mongo:3.2 ""/entrypoint.sh mongo"" 21 minutes ago Created stupefied_pasteur ``` According to the docker daemon logs, the daemon never gets the `/start` instruction, just the `/create` ",1,docker container never starts containers issue type bug report component name docker container ansible version ansible config file usr src playbooks ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment linux generic ubuntu smp fri jun utc gnu linux summary containers are created with the wrong name outside network and never start steps to reproduce run a playbook that does the following name ensure docker network exists docker network appends true connected existing container name my network state present name ensure new container is running docker container image mongo name new container networks name my network state started expected results a container named new container running and in the my networ network actual results container with generic name is created outside network not running docker ps a yields container id image command created status ports names mongo entrypoint sh mongo minutes ago created stupefied pasteur according to the docker daemon logs the daemon never gets the start instruction just the create ,1 1153,5029405418.0,IssuesEvent,2016-12-15 21:05:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,files module copy: error copying files into fuse filesystem,affects_2.2 bug_report easyfix waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = [...]/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [privilege_escalation] become=true [defaults] host_key_checking=False ``` ##### OS / ENVIRONMENT Host OS: * macOS Sierra 10.12.1 * Python 2.7.12 Managed OS: * Debian Jessie with Proxmox 4.3 installed * Python 2.7.9 ##### SUMMARY I want to copy the firewall configuration file of Proxmox to the managed server. 
But instead of copying, I get an error. I was able to isolate the issue to copying the file into a FUSE filesystem. ##### STEPS TO REPRODUCE Module invokation: ``` copy: src: 'cluster.fw' dest: '/etc/pve/firewall/cluster.fw' ``` Unfortunately my *nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself, but after installing proxmox, the folder /etc/pve is actually a mount point to a fuse filesystem. But I think this problem occurs with every fuse filesystem? ##### EXPECTED RESULTS The file should be copied without an error or at least the error message should be more descriptive. ##### ACTUAL RESULTS The error message is as follows (I inserted some line breaks into ""module_stdout"" for better readability): ``` fatal: [my_host]: FAILED! => { ""changed"": false, ""checksum"": ""783551ae407f9d3396749507d505c2c22f8fc09f"", ""failed"": true, ""invocation"": { ""module_args"": { ""dest"": ""/etc/pve/firewall/cluster.fw"", ""src"": ""cluster.fw"" }, ""module_name"": ""copy"" }, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_7ulKiR/ansible_module_copy.py\"", line 364, in \r\n main()\r\n File \""/tmp/ansible_7ulKiR/ansible_module_copy.py\"", line 343, in main\r\n module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])\r\n File \""/tmp/ansible_7ulKiR/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 2003, in atomic_move\r\nNameError: global name 'exception' is not defined\r\n"", ""msg"": ""MODULE FAILURE"" } ``` If I copy the file to another folder (e.g. /etc or /tmp) the error doesn't occur. Interestingly enough, if I copy the file to /tmp and then invoke the copy module as follows, the error occurs neither: ``` copy: src: '/tmp/cluster.fw' remote_src: yes dest: '/etc/pve/firewall/cluster.fw' ``` ",True,"files module copy: error copying files into fuse filesystem - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = [...]/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [privilege_escalation] become=true [defaults] host_key_checking=False ``` ##### OS / ENVIRONMENT Host OS: * macOS Sierra 10.12.1 * Python 2.7.12 Managed OS: * Debian Jessie with Proxmox 4.3 installed * Python 2.7.9 ##### SUMMARY I want to copy the firewall configuration file of Proxmox to the managed server. But instead of copying, I get an error. I was able to isolate the issue to copying the file into a FUSE filesystem. ##### STEPS TO REPRODUCE Module invokation: ``` copy: src: 'cluster.fw' dest: '/etc/pve/firewall/cluster.fw' ``` Unfortunately my *nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself, but after installing proxmox, the folder /etc/pve is actually a mount point to a fuse filesystem. But I think this problem occurs with every fuse filesystem? ##### EXPECTED RESULTS The file should be copied without an error or at least the error message should be more descriptive. ##### ACTUAL RESULTS The error message is as follows (I inserted some line breaks into ""module_stdout"" for better readability): ``` fatal: [my_host]: FAILED! 
=> { ""changed"": false, ""checksum"": ""783551ae407f9d3396749507d505c2c22f8fc09f"", ""failed"": true, ""invocation"": { ""module_args"": { ""dest"": ""/etc/pve/firewall/cluster.fw"", ""src"": ""cluster.fw"" }, ""module_name"": ""copy"" }, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_7ulKiR/ansible_module_copy.py\"", line 364, in \r\n main()\r\n File \""/tmp/ansible_7ulKiR/ansible_module_copy.py\"", line 343, in main\r\n module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])\r\n File \""/tmp/ansible_7ulKiR/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 2003, in atomic_move\r\nNameError: global name 'exception' is not defined\r\n"", ""msg"": ""MODULE FAILURE"" } ``` If I copy the file to another folder (e.g. /etc or /tmp) the error doesn't occur. Interestingly enough, if I copy the file to /tmp and then invoke the copy module as follows, the error occurs neither: ``` copy: src: '/tmp/cluster.fw' remote_src: yes dest: '/etc/pve/firewall/cluster.fw' ``` ",1,files module copy error copying files into fuse filesystem issue type bug report component name copy ansible version ansible config file ansible cfg configured module search path default w o overrides configuration become true host key checking false os environment host os macos sierra python managed os debian jessie with proxmox installed python summary i want to copy the firewall configuration file of proxmox to the managed server but instead of copying i get an error i was able to isolate the issue to copying the file into a fuse filesystem steps to reproduce module invokation copy src cluster fw dest etc pve firewall cluster fw unfortunately my nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself but after installing proxmox the folder etc pve is actually a mount point to a fuse filesystem but i think this problem occurs with every fuse filesystem expected results the file should be copied without an error or at least the error message should be more descriptive actual results the error message is as follows i inserted some line breaks into module stdout for better readability fatal failed changed false checksum failed true invocation module args dest etc pve firewall cluster fw src cluster fw module name copy module stderr module stdout traceback most recent call last r n file tmp ansible ansible module copy py line in r n main r n file tmp ansible ansible module copy py line in main r n module atomic move b mysrc dest unsafe writes module params r n file tmp ansible ansible modlib zip ansible module utils basic py line in atomic move r nnameerror global name exception is not defined r n msg module failure if i copy the file to another folder e g etc or tmp the error doesn t occur interestingly enough if i copy the file to tmp and then invoke the copy module as follows the error occurs neither copy src tmp cluster fw remote src yes dest etc pve firewall cluster fw ,1 1886,6577522049.0,IssuesEvent,2017-09-12 01:30:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template: and copy: can't locate their input files - 2.0.1.0 regression from 1.9.4,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: template or copy module ##### Ansible Version: ``` ansible 2.0.1.0 config file = /home/granjan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Nothing 
relevant ##### Environment: N/A ##### Summary: test-template.yml: ``` - hosts: localhost roles: - test_template tasks: - include: roles/test_template/tasks/test.yml ``` roles/test_template/defaults/main.yml: ``` # Empty, needs to exist on ansible 1.9.4 ``` roles/test_template/tasks/test.yml: ``` - template: dest: /tmp/template.out src: template.out.j2 ``` roles/test_template/templates/template.out.j2: ``` Just a trivial template ``` The above fails to find template.out.j2 ##### Steps To Reproduce: ``` % ansible-playbook test-template.yml [WARNING]: provided hosts list is empty, only localhost is available PLAY *************************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [include] ***************************************************************** included: /path/to/playbook_dir/roles/test_template/tasks/test.yml for localhost TASK [template] **************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""IOError: [Errno 2] No such file or directory: u'/path/to/playbook_dir/template.out.j2'""} to retry, use: --limit @test-template.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` ##### Expected Results: On ansible 1.9.4, the correct template file is located relative to the role/task that contains the template: operation. copy: is also affected by this. ##### Actual Results: See above. ",True,"template: and copy: can't locate their input files - 2.0.1.0 regression from 1.9.4 - ##### Issue Type: - Bug Report ##### Plugin Name: template or copy module ##### Ansible Version: ``` ansible 2.0.1.0 config file = /home/granjan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Nothing relevant ##### Environment: N/A ##### Summary: test-template.yml: ``` - hosts: localhost roles: - test_template tasks: - include: roles/test_template/tasks/test.yml ``` roles/test_template/defaults/main.yml: ``` # Empty, needs to exist on ansible 1.9.4 ``` roles/test_template/tasks/test.yml: ``` - template: dest: /tmp/template.out src: template.out.j2 ``` roles/test_template/templates/template.out.j2: ``` Just a trivial template ``` The above fails to find template.out.j2 ##### Steps To Reproduce: ``` % ansible-playbook test-template.yml [WARNING]: provided hosts list is empty, only localhost is available PLAY *************************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [include] ***************************************************************** included: /path/to/playbook_dir/roles/test_template/tasks/test.yml for localhost TASK [template] **************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""IOError: [Errno 2] No such file or directory: u'/path/to/playbook_dir/template.out.j2'""} to retry, use: --limit @test-template.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` ##### Expected Results: On ansible 1.9.4, the correct template file is located relative to the role/task that contains the template: operation. copy: is also affected by this. ##### Actual Results: See above. 
",1,template and copy can t locate their input files regression from issue type bug report plugin name template or copy module ansible version ansible config file home granjan ansible cfg configured module search path default w o overrides ansible configuration nothing relevant environment n a summary test template yml hosts localhost roles test template tasks include roles test template tasks test yml roles test template defaults main yml empty needs to exist on ansible roles test template tasks test yml template dest tmp template out src template out roles test template templates template out just a trivial template the above fails to find template out steps to reproduce ansible playbook test template yml provided hosts list is empty only localhost is available play task ok task included path to playbook dir roles test template tasks test yml for localhost task fatal failed changed false failed true msg ioerror no such file or directory u path to playbook dir template out to retry use limit test template retry play recap localhost ok changed unreachable failed expected results on ansible the correct template file is located relative to the role task that contains the template operation copy is also affected by this actual results see above ,1 1694,6574204930.0,IssuesEvent,2017-09-11 11:57:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"nxos_facts: local_intf = item['ROW_nbor']['l_port_id'] -> TypeError: list indices must be integers, not str",affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile=hosts ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/id_rsa host_key_checking=False ##### OS / ENVIRONMENT ##### SUMMARY failure to execute facts commands on the remote switch. 
##### STEPS TO REPRODUCE ``` --- - name: get facts hosts: switch gather_facts: no connection: local vars: cli: username: ""{{ ansible_user }}"" host: ""{{ ansible_host }}"" ssh_keyfile: ~/.ssh/id_rsa.pub transport: cli tasks: - nxos_command: commands: - show version host: ""{{ ansible_host }}"" provider: ""{{ cli }}"" - nxos_facts: gather_subset: all host: ""{{ ansible_host }}"" provider: ""{{ cli }}"" ``` ##### EXPECTED RESULTS generate a list of nxos facts ##### ACTUAL RESULTS ``` TASK [nxos_command] ************************************************************ task path: /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/test.yml:14 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.229.20> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.20> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523 `"" && echo ansible-tmp-1478876805.99-82189371664523=""` echo $HOME/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523 `"" ) && sleep 0' <10.10.229.20> PUT /tmp/tmpjv58Ja TO /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py <10.10.229.20> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/ /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py && sleep 0' <10.10.229.20> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/"" > /dev/null 2>&1 && sleep 0' ok: [rr1-n22-r11-x32sp-4a] => { ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show version"" ], ""host"": ""10.10.229.20"", ""interval"": 1, ""match"": ""all"", ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.229.20"", ""ssh_keyfile"": ""~/.ssh/id_rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""retries"": 10, ""ssh_keyfile"": ""/home/emarq/.ssh/id_rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""nxos_command"" }, ""stdout"": [ ""\nCisco Nexus Operating System (NX-OS) Software\nTAC support: http://www.cisco.com/tac\nDocuments: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html\nCopyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.\nThe copyrights to certain works contained herein are owned by\nother third parties and are used and distributed under license.\nSome parts of this software are covered under the GNU Public\nLicense. 
A copy of the license is available at\nhttp://www.gnu.org/licenses/gpl.html.\n\nSoftware\n BIOS: version 3.5.0\n loader: version N/A\n kickstart: version 6.0(2)U6(6)\n system: version 6.0(2)U6(6)\n Power Sequencer Firmware: \n Module 1: version v1.1\n BIOS compile time: 09/14/2015\n kickstart image file is: bootflash:///n3000-uk9-kickstart.6.0.2.U6.6.bin\n kickstart compile time: 2/16/2016 1:00:00 [02/16/2016 02:01:17]\n system image file is: bootflash:///n3000-uk9.6.0.2.U6.6.bin\n system compile time: 2/16/2016 1:00:00 [02/16/2016 02:27:15]\n\n\nHardware\n cisco Nexus 3132 Chassis (\""32x40G Supervisor\"")\n Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 7891916 kB of memory.\n Processor Board ID FOC19464VH1\n\n Device name: rr1-n22-r11-x32sp-4a\n bootflash: 15269888 kB\n\nKernel uptime is 3 day(s), 7 hour(s), 48 minute(s), 1 second(s)\n\nLast reset at 822894 usecs after Mon Nov 7 23:17:48 2016\n\n Reason: Reset Requested by CLI command reload\n System version: 6.0(2)U6(6)\n Service: \n\nplugin\n Core Plugin, Ethernet Plugin\n"" ], ""stdout_lines"": [ [ """", ""Cisco Nexus Operating System (NX-OS) Software"", ""TAC support: http://www.cisco.com/tac"", ""Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html"", ""Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved."", ""The copyrights to certain works contained herein are owned by"", ""other third parties and are used and distributed under license."", ""Some parts of this software are covered under the GNU Public"", ""License. A copy of the license is available at"", ""http://www.gnu.org/licenses/gpl.html."", """", ""Software"", "" BIOS: version 3.5.0"", "" loader: version N/A"", "" kickstart: version 6.0(2)U6(6)"", "" system: version 6.0(2)U6(6)"", "" Power Sequencer Firmware: "", "" Module 1: version v1.1"", "" BIOS compile time: 09/14/2015"", "" kickstart image file is: bootflash:///n3000-uk9-kickstart.6.0.2.U6.6.bin"", "" kickstart compile time: 2/16/2016 1:00:00 [02/16/2016 02:01:17]"", "" system image file is: bootflash:///n3000-uk9.6.0.2.U6.6.bin"", "" system compile time: 2/16/2016 1:00:00 [02/16/2016 02:27:15]"", """", """", ""Hardware"", "" cisco Nexus 3132 Chassis (\""32x40G Supervisor\"")"", "" Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 7891916 kB of memory."", "" Processor Board ID FOC19464VH1"", """", "" Device name: rr1-n22-r11-x32sp-4a"", "" bootflash: 15269888 kB"", """", ""Kernel uptime is 3 day(s), 7 hour(s), 48 minute(s), 1 second(s)"", """", ""Last reset at 822894 usecs after Mon Nov 7 23:17:48 2016"", """", "" Reason: Reset Requested by CLI command reload"", "" System version: 6.0(2)U6(6)"", "" Service: "", """", ""plugin"", "" Core Plugin, Ethernet Plugin"", """" ] ], ""warnings"": [] } TASK [nxos_facts] ************************************************************** task path: /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/test.yml:20 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py <10.10.229.20> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.20> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761 `"" && echo ansible-tmp-1478876809.3-42326483578761=""` echo $HOME/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761 `"" ) && sleep 0' <10.10.229.20> PUT /tmp/tmpD1LFYa TO /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py <10.10.229.20> EXEC /bin/sh -c 'chmod u+x 
/home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/ /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py && sleep 0' <10.10.229.20> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 531, in main() File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 512, in main inst.populate() File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 312, in populate self.facts['neighbors'] = self.populate_neighbors(data) File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 337, in populate_neighbors local_intf = item['ROW_nbor']['l_port_id'] TypeError: list indices must be integers, not str fatal: [rr1-n22-r11-x32sp-4a]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 531, in \n main()\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 512, in main\n inst.populate()\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 312, in populate\n self.facts['neighbors'] = self.populate_neighbors(data)\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 337, in populate_neighbors\n local_intf = item['ROW_nbor']['l_port_id']\nTypeError: list indices must be integers, not str\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",True,"nxos_facts: local_intf = item['ROW_nbor']['l_port_id'] -> TypeError: list indices must be integers, not str - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile=hosts ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/id_rsa host_key_checking=False ##### OS / ENVIRONMENT ##### SUMMARY failure to execute facts commands on the remote switch. 
##### STEPS TO REPRODUCE ``` --- - name: get facts hosts: switch gather_facts: no connection: local vars: cli: username: ""{{ ansible_user }}"" host: ""{{ ansible_host }}"" ssh_keyfile: ~/.ssh/id_rsa.pub transport: cli tasks: - nxos_command: commands: - show version host: ""{{ ansible_host }}"" provider: ""{{ cli }}"" - nxos_facts: gather_subset: all host: ""{{ ansible_host }}"" provider: ""{{ cli }}"" ``` ##### EXPECTED RESULTS generate a list of nxos facts ##### ACTUAL RESULTS ``` TASK [nxos_command] ************************************************************ task path: /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/test.yml:14 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.229.20> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.20> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523 `"" && echo ansible-tmp-1478876805.99-82189371664523=""` echo $HOME/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523 `"" ) && sleep 0' <10.10.229.20> PUT /tmp/tmpjv58Ja TO /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py <10.10.229.20> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/ /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py && sleep 0' <10.10.229.20> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478876805.99-82189371664523/"" > /dev/null 2>&1 && sleep 0' ok: [rr1-n22-r11-x32sp-4a] => { ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show version"" ], ""host"": ""10.10.229.20"", ""interval"": 1, ""match"": ""all"", ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.229.20"", ""ssh_keyfile"": ""~/.ssh/id_rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""retries"": 10, ""ssh_keyfile"": ""/home/emarq/.ssh/id_rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""nxos_command"" }, ""stdout"": [ ""\nCisco Nexus Operating System (NX-OS) Software\nTAC support: http://www.cisco.com/tac\nDocuments: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html\nCopyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.\nThe copyrights to certain works contained herein are owned by\nother third parties and are used and distributed under license.\nSome parts of this software are covered under the GNU Public\nLicense. 
A copy of the license is available at\nhttp://www.gnu.org/licenses/gpl.html.\n\nSoftware\n BIOS: version 3.5.0\n loader: version N/A\n kickstart: version 6.0(2)U6(6)\n system: version 6.0(2)U6(6)\n Power Sequencer Firmware: \n Module 1: version v1.1\n BIOS compile time: 09/14/2015\n kickstart image file is: bootflash:///n3000-uk9-kickstart.6.0.2.U6.6.bin\n kickstart compile time: 2/16/2016 1:00:00 [02/16/2016 02:01:17]\n system image file is: bootflash:///n3000-uk9.6.0.2.U6.6.bin\n system compile time: 2/16/2016 1:00:00 [02/16/2016 02:27:15]\n\n\nHardware\n cisco Nexus 3132 Chassis (\""32x40G Supervisor\"")\n Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 7891916 kB of memory.\n Processor Board ID FOC19464VH1\n\n Device name: rr1-n22-r11-x32sp-4a\n bootflash: 15269888 kB\n\nKernel uptime is 3 day(s), 7 hour(s), 48 minute(s), 1 second(s)\n\nLast reset at 822894 usecs after Mon Nov 7 23:17:48 2016\n\n Reason: Reset Requested by CLI command reload\n System version: 6.0(2)U6(6)\n Service: \n\nplugin\n Core Plugin, Ethernet Plugin\n"" ], ""stdout_lines"": [ [ """", ""Cisco Nexus Operating System (NX-OS) Software"", ""TAC support: http://www.cisco.com/tac"", ""Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html"", ""Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved."", ""The copyrights to certain works contained herein are owned by"", ""other third parties and are used and distributed under license."", ""Some parts of this software are covered under the GNU Public"", ""License. A copy of the license is available at"", ""http://www.gnu.org/licenses/gpl.html."", """", ""Software"", "" BIOS: version 3.5.0"", "" loader: version N/A"", "" kickstart: version 6.0(2)U6(6)"", "" system: version 6.0(2)U6(6)"", "" Power Sequencer Firmware: "", "" Module 1: version v1.1"", "" BIOS compile time: 09/14/2015"", "" kickstart image file is: bootflash:///n3000-uk9-kickstart.6.0.2.U6.6.bin"", "" kickstart compile time: 2/16/2016 1:00:00 [02/16/2016 02:01:17]"", "" system image file is: bootflash:///n3000-uk9.6.0.2.U6.6.bin"", "" system compile time: 2/16/2016 1:00:00 [02/16/2016 02:27:15]"", """", """", ""Hardware"", "" cisco Nexus 3132 Chassis (\""32x40G Supervisor\"")"", "" Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 7891916 kB of memory."", "" Processor Board ID FOC19464VH1"", """", "" Device name: rr1-n22-r11-x32sp-4a"", "" bootflash: 15269888 kB"", """", ""Kernel uptime is 3 day(s), 7 hour(s), 48 minute(s), 1 second(s)"", """", ""Last reset at 822894 usecs after Mon Nov 7 23:17:48 2016"", """", "" Reason: Reset Requested by CLI command reload"", "" System version: 6.0(2)U6(6)"", "" Service: "", """", ""plugin"", "" Core Plugin, Ethernet Plugin"", """" ] ], ""warnings"": [] } TASK [nxos_facts] ************************************************************** task path: /mnt/c/Users/emarq/Documents/Source/Repos/Solutions.Network.Automation/MAS/Ansible/test/test.yml:20 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py <10.10.229.20> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.20> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761 `"" && echo ansible-tmp-1478876809.3-42326483578761=""` echo $HOME/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761 `"" ) && sleep 0' <10.10.229.20> PUT /tmp/tmpD1LFYa TO /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py <10.10.229.20> EXEC /bin/sh -c 'chmod u+x 
/home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/ /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py && sleep 0' <10.10.229.20> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/nxos_facts.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478876809.3-42326483578761/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 531, in main() File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 512, in main inst.populate() File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 312, in populate self.facts['neighbors'] = self.populate_neighbors(data) File ""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py"", line 337, in populate_neighbors local_intf = item['ROW_nbor']['l_port_id'] TypeError: list indices must be integers, not str fatal: [rr1-n22-r11-x32sp-4a]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 531, in \n main()\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 512, in main\n inst.populate()\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 312, in populate\n self.facts['neighbors'] = self.populate_neighbors(data)\n File \""/tmp/ansible_iL9Hip/ansible_module_nxos_facts.py\"", line 337, in populate_neighbors\n local_intf = item['ROW_nbor']['l_port_id']\nTypeError: list indices must be integers, not str\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",1,nxos facts local intf item typeerror list indices must be integers not str issue type bug report component name ansible version ansible config file mnt c users emarq documents source repos solutions network automation mas ansible test ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables hostfile hosts ansible ssh user admin ansible ssh private key file home emarq ssh id rsa host key checking false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary failure to execute facts commands on the remote switch steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name get facts hosts switch gather facts no connection local vars cli username ansible user host ansible host ssh keyfile ssh id rsa pub transport cli tasks nxos command commands show version host ansible host provider cli nxos facts gather subset all host ansible host provider cli expected results generate a list of nxos facts actual results task task path mnt c users emarq documents source repos solutions network automation mas ansible test test yml using module file usr lib dist packages ansible modules core network nxos nxos command py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home emarq ansible tmp ansible tmp nxos command py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos command py sleep exec bin sh c usr bin python home emarq 
ansible tmp ansible tmp nxos command py rm rf home emarq ansible tmp ansible tmp dev null sleep ok changed false invocation module args auth pass null authorize false commands show version host interval match all password null port null provider host ssh keyfile ssh id rsa pub transport cli username admin retries ssh keyfile home emarq ssh id rsa pub timeout transport cli use ssl false username admin validate certs true wait for null module name nxos command stdout ncisco nexus operating system nx os software ntac support c cisco systems inc all rights reserved nthe copyrights to certain works contained herein are owned by nother third parties and are used and distributed under license nsome parts of this software are covered under the gnu public nlicense a copy of the license is available at n bios version n loader version n a n kickstart version n system version n power sequencer firmware n module version n bios compile time n kickstart image file is bootflash kickstart bin n kickstart compile time n system image file is bootflash bin n system compile time n n nhardware n cisco nexus chassis supervisor n intel r core tm cpu with kb of memory n processor board id n n device name n bootflash kb n nkernel uptime is day s hour s minute s second s n nlast reset at usecs after mon nov n n reason reset requested by cli command reload n system version n service n nplugin n core plugin ethernet plugin n stdout lines cisco nexus operating system nx os software tac support documents copyright c cisco systems inc all rights reserved the copyrights to certain works contained herein are owned by other third parties and are used and distributed under license some parts of this software are covered under the gnu public license a copy of the license is available at software bios version loader version n a kickstart version system version power sequencer firmware module version bios compile time kickstart image file is bootflash kickstart bin kickstart compile time system image file is bootflash bin system compile time hardware cisco nexus chassis supervisor intel r core tm cpu with kb of memory processor board id device name bootflash kb kernel uptime is day s hour s minute s second s last reset at usecs after mon nov reason reset requested by cli command reload system version service plugin core plugin ethernet plugin warnings task task path mnt c users emarq documents source repos solutions network automation mas ansible test test yml using module file usr lib dist packages ansible modules core network nxos nxos facts py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home emarq ansible tmp ansible tmp nxos facts py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos facts py sleep exec bin sh c usr bin python home emarq ansible tmp ansible tmp nxos facts py rm rf home emarq ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module nxos facts py line in main file tmp ansible ansible module nxos facts py line in main inst populate file tmp ansible ansible module nxos facts py line in populate self facts self populate neighbors data file tmp ansible ansible module nxos facts py line in populate neighbors local intf item typeerror list indices must be integers not str fatal failed changed false failed true invocation module name 
nxos facts module stderr traceback most recent call last n file tmp ansible ansible module nxos facts py line in n main n file tmp ansible ansible module nxos facts py line in main n inst populate n file tmp ansible ansible module nxos facts py line in populate n self facts self populate neighbors data n file tmp ansible ansible module nxos facts py line in populate neighbors n local intf item ntypeerror list indices must be integers not str n module stdout msg module failure ,1 1686,6574166156.0,IssuesEvent,2017-09-11 11:47:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2_elb_lb misconfigures minimum size, desired size, and maximum size",affects_2.2 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Operating on: ubuntu 14.04 Managing: ubuntu 14.04 ##### SUMMARY When updating an existing ELB with min size 2 and max size 10, inevitably sometimes the resulting configuration has a min size of 3 and max size of 11 (as well as a desired size of 3). This doesn't show up anywhere in my playbook (see below). ##### STEPS TO REPRODUCE ``` --- # roles/auto-scaling/tasks/main.yml - name: Retrieve current Auto Scaling Group properties command: ""aws --region {{ region }} autoscaling describe-auto-scaling-groups --auto-scaling-group-names webapp-{{ e }}"" register: asg_properties_result - name: Set asg_properties variable from JSON output if the Auto Scaling Group already exists set_fact: asg_properties: ""{{ (asg_properties_result.stdout | from_json).AutoScalingGroups[0] }}"" when: (asg_properties_result.stdout | from_json).AutoScalingGroups | count - name: Configure Auto Scaling Group and perform rolling deploy ec2_asg: region: ""{{ region }}"" name: ""webapp-{{ e }}"" launch_config_name: ""{{ ec2_lc.name }}"" availability_zones: ""{{ zone }}"" health_check_type: ELB health_check_period: 300 tags: - Name: ""webapp-{{ e }}"" environment: ""{{ e }}"" desired_capacity: 2 # desired_capacity: ""{{ asg_properties.DesiredCapacity | default(2) }}"" # broken due to bug in ansible 2.0.1.0 replace_all_instances: yes replace_batch_size: 1 # replace_batch_size: ""{{ (asg_properties.DesiredCapacity | default(2) / 4) | round(0, 'ceil') | int }}"" # broken due to bug in ansible 2.0.1.0 min_size: 2 max_size: 10 load_balancers: - ""webapp-{{ e }}"" state: present wait_timeout: 420 register: asg_result - name: Configure Scaling Policies ec2_scaling_policy: region: ""{{ region }}"" name: ""{{ item.name }}"" asg_name: ""webapp-{{ e }}"" state: present adjustment_type: ""{{ item.adjustment_type }}"" min_adjustment_step: ""{{ item.min_adjustment_step }}"" scaling_adjustment: ""{{ item.scaling_adjustment }}"" cooldown: ""{{ item.cooldown }}"" with_items: - name: ""Increase Group Size"" adjustment_type: ""ChangeInCapacity"" scaling_adjustment: +1 min_adjustment_step: 1 cooldown: 180 - name: ""Decrease Group Size"" adjustment_type: ""ChangeInCapacity"" scaling_adjustment: -1 min_adjustment_step: 1 cooldown: 300 register: sp_result - name: Determine Metric Alarm configuration set_fact: metric_alarms: - name: ""{{ asg_name }}-ScaleUp"" comparison: "">="" threshold: 50.0 alarm_actions: - ""{{ sp_result.results[0].arn }}"" - name: ""{{ asg_name }}-ScaleDown"" comparison: ""<="" threshold: 20.0 alarm_actions: - ""{{ 
sp_result.results[1].arn }}"" - name: Configure Metric Alarms and link to Scaling Policies ec2_metric_alarm: region: ""{{ region }}"" name: ""{{ item.name }}"" state: present metric: ""CPUUtilization"" namespace: ""AWS/EC2"" statistic: ""Average"" comparison: ""{{ item.comparison }}"" threshold: ""{{ item.threshold }}"" period: 60 evaluation_periods: 5 unit: ""Percent"" dimensions: AutoScalingGroupName: ""webapp-{{ e }}"" alarm_actions: ""{{ item.alarm_actions }}"" with_items: ""{{metric_alarms}}"" # when: max_size > 1 register: ma_result ``` ##### EXPECTED RESULTS I expect results that look like the bottom line in the table, instead I get results like the top line. ![alt text](https://i.gyazo.com/031adc4de5340c5361f2821169716fc2.png ""Results"") ##### ACTUAL RESULTS I expect results that look like the bottom line in the table, instead I get results like the top line. ![alt text](https://i.gyazo.com/031adc4de5340c5361f2821169716fc2.png ""Results"") ",True,"ec2_elb_lb misconfigures minimum size, desired size, and maximum size - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Operating on: ubuntu 14.04 Managing: ubuntu 14.04 ##### SUMMARY When updating an existing ELB with min size 2 and max size 10, inevitably sometimes the resulting configuration has a min size of 3 and max size of 11 (as well as a desired size of 3). This doesn't show up anywhere in my playbook (see below). ##### STEPS TO REPRODUCE ``` --- # roles/auto-scaling/tasks/main.yml - name: Retrieve current Auto Scaling Group properties command: ""aws --region {{ region }} autoscaling describe-auto-scaling-groups --auto-scaling-group-names webapp-{{ e }}"" register: asg_properties_result - name: Set asg_properties variable from JSON output if the Auto Scaling Group already exists set_fact: asg_properties: ""{{ (asg_properties_result.stdout | from_json).AutoScalingGroups[0] }}"" when: (asg_properties_result.stdout | from_json).AutoScalingGroups | count - name: Configure Auto Scaling Group and perform rolling deploy ec2_asg: region: ""{{ region }}"" name: ""webapp-{{ e }}"" launch_config_name: ""{{ ec2_lc.name }}"" availability_zones: ""{{ zone }}"" health_check_type: ELB health_check_period: 300 tags: - Name: ""webapp-{{ e }}"" environment: ""{{ e }}"" desired_capacity: 2 # desired_capacity: ""{{ asg_properties.DesiredCapacity | default(2) }}"" # broken due to bug in ansible 2.0.1.0 replace_all_instances: yes replace_batch_size: 1 # replace_batch_size: ""{{ (asg_properties.DesiredCapacity | default(2) / 4) | round(0, 'ceil') | int }}"" # broken due to bug in ansible 2.0.1.0 min_size: 2 max_size: 10 load_balancers: - ""webapp-{{ e }}"" state: present wait_timeout: 420 register: asg_result - name: Configure Scaling Policies ec2_scaling_policy: region: ""{{ region }}"" name: ""{{ item.name }}"" asg_name: ""webapp-{{ e }}"" state: present adjustment_type: ""{{ item.adjustment_type }}"" min_adjustment_step: ""{{ item.min_adjustment_step }}"" scaling_adjustment: ""{{ item.scaling_adjustment }}"" cooldown: ""{{ item.cooldown }}"" with_items: - name: ""Increase Group Size"" adjustment_type: ""ChangeInCapacity"" scaling_adjustment: +1 min_adjustment_step: 1 cooldown: 180 - name: ""Decrease Group Size"" adjustment_type: ""ChangeInCapacity"" scaling_adjustment: -1 min_adjustment_step: 1 cooldown: 300 register: sp_result - name: Determine 
Metric Alarm configuration set_fact: metric_alarms: - name: ""{{ asg_name }}-ScaleUp"" comparison: "">="" threshold: 50.0 alarm_actions: - ""{{ sp_result.results[0].arn }}"" - name: ""{{ asg_name }}-ScaleDown"" comparison: ""<="" threshold: 20.0 alarm_actions: - ""{{ sp_result.results[1].arn }}"" - name: Configure Metric Alarms and link to Scaling Policies ec2_metric_alarm: region: ""{{ region }}"" name: ""{{ item.name }}"" state: present metric: ""CPUUtilization"" namespace: ""AWS/EC2"" statistic: ""Average"" comparison: ""{{ item.comparison }}"" threshold: ""{{ item.threshold }}"" period: 60 evaluation_periods: 5 unit: ""Percent"" dimensions: AutoScalingGroupName: ""webapp-{{ e }}"" alarm_actions: ""{{ item.alarm_actions }}"" with_items: ""{{metric_alarms}}"" # when: max_size > 1 register: ma_result ``` ##### EXPECTED RESULTS I expect results that look like the bottom line in the table, instead I get results like the top line. ![alt text](https://i.gyazo.com/031adc4de5340c5361f2821169716fc2.png ""Results"") ##### ACTUAL RESULTS I expect results that look like the bottom line in the table, instead I get results like the top line. ![alt text](https://i.gyazo.com/031adc4de5340c5361f2821169716fc2.png ""Results"") ",1, elb lb misconfigures minimum size desired size and maximum size issue type bug report component name elb lb ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment operating on ubuntu managing ubuntu summary when updating an existing elb with min size and max size inevitably sometimes the resulting configuration has a min size of and max size of as well as a desired size of this doesn t show up anywhere in my playbook see below steps to reproduce roles auto scaling tasks main yml name retrieve current auto scaling group properties command aws region region autoscaling describe auto scaling groups auto scaling group names webapp e register asg properties result name set asg properties variable from json output if the auto scaling group already exists set fact asg properties asg properties result stdout from json autoscalinggroups when asg properties result stdout from json autoscalinggroups count name configure auto scaling group and perform rolling deploy asg region region name webapp e launch config name lc name availability zones zone health check type elb health check period tags name webapp e environment e desired capacity desired capacity asg properties desiredcapacity default broken due to bug in ansible replace all instances yes replace batch size replace batch size asg properties desiredcapacity default round ceil int broken due to bug in ansible min size max size load balancers webapp e state present wait timeout register asg result name configure scaling policies scaling policy region region name item name asg name webapp e state present adjustment type item adjustment type min adjustment step item min adjustment step scaling adjustment item scaling adjustment cooldown item cooldown with items name increase group size adjustment type changeincapacity scaling adjustment min adjustment step cooldown name decrease group size adjustment type changeincapacity scaling adjustment min adjustment step cooldown register sp result name determine metric alarm configuration set fact metric alarms name asg name scaleup comparison threshold alarm actions sp result results arn name asg name scaledown comparison threshold alarm actions sp result results arn name configure metric alarms and 
link to scaling policies metric alarm region region name item name state present metric cpuutilization namespace aws statistic average comparison item comparison threshold item threshold period evaluation periods unit percent dimensions autoscalinggroupname webapp e alarm actions item alarm actions with items metric alarms when max size register ma result expected results i expect results that look like the bottom line in the table instead i get results like the top line results actual results i expect results that look like the bottom line in the table instead i get results like the top line results ,1 929,4642761981.0,IssuesEvent,2016-09-30 10:53:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"win_feature always returns ""Failed to add feature"" on Windows Server 2016",affects_2.1 bug_report waiting_on_maintainer windows,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * `win_feature` ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = SNIPPED/ansible/ansible.cfg configured module search path = ['library'] ``` ##### CONFIGURATION * n/a ##### OS / ENVIRONMENT * Host: Mac OS X El Capitan 10.11.6 * Target: Windows Server 2016 Datacenter (RTM Build 14393.rs1_release.160915-0644) ##### SUMMARY `win_feature` module is unable to add features to Windows Server 2016 targets (including trivial use cases such as `Telnet-Client`) ##### STEPS TO REPRODUCE ``` ansible -m win_feature -a 'name=AD-Domain-Services' examplehost ``` (in this case, `examplehost` is a VM running on Virtualbox 5.0.26r108824) ##### EXPECTED RESULTS ``` 127.0.0.1 | SUCCESS => { ""changed"": true, ""exitcode"": ""0"", ""failed"": false, ""feature_result"": [], ""invocation"": { ""module_name"": ""win_feature"" }, ""msg"": ""Happy Happy Joy Joy"", ""restart_needed"": false, ""success"": true } ``` ##### ACTUAL RESULTS ``` Using SNIPPED/ansible/ansible.cfg as config file Loaded callback minimal of type stdout, v2.0 <127.0.0.1> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 55986 TO 127.0.0.1 <127.0.0.1> EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1475181976.06-278942010948362"").FullName | Write-Host -Separator ''; <127.0.0.1> PUT ""/var/folders/15/b3hfwryj5570qt_r8b9q2jpw0000gn/T/tmpAgxz8u"" TO ""C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1"" <127.0.0.1> EXEC Set-StrictMode -Version Latest Try { & 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1' } Catch { $_obj = @{ failed = $true } If ($_.Exception.GetType) { $_obj.Add('msg', $_.Exception.Message) } Else { $_obj.Add('msg', $_.ToString()) } If ($_.InvocationInfo.PositionMessage) { $_obj.Add('exception', $_.InvocationInfo.PositionMessage) } ElseIf ($_.ScriptStackTrace) { $_obj.Add('exception', $_.ScriptStackTrace) } Try { $_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json)) } Catch { } Echo $_obj | ConvertTo-Json -Compress -Depth 99 Exit 1 } Finally { Remove-Item ""C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362"" -Force -Recurse -ErrorAction SilentlyContinue } 127.0.0.1 | FAILED! 
=> { ""changed"": false, ""exitcode"": ""Failed"", ""failed"": true, ""feature_result"": [], ""invocation"": { ""module_name"": ""win_feature"" }, ""msg"": ""Failed to add feature"", ""restart_needed"": false, ""success"": false } ```",True,"win_feature always returns ""Failed to add feature"" on Windows Server 2016 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * `win_feature` ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = SNIPPED/ansible/ansible.cfg configured module search path = ['library'] ``` ##### CONFIGURATION * n/a ##### OS / ENVIRONMENT * Host: Mac OS X El Capitan 10.11.6 * Target: Windows Server 2016 Datacenter (RTM Build 14393.rs1_release.160915-0644) ##### SUMMARY `win_feature` module is unable to add features to Windows Server 2016 targets (including trivial use cases such as `Telnet-Client`) ##### STEPS TO REPRODUCE ``` ansible -m win_feature -a 'name=AD-Domain-Services' examplehost ``` (in this case, `examplehost` is a VM running on Virtualbox 5.0.26r108824) ##### EXPECTED RESULTS ``` 127.0.0.1 | SUCCESS => { ""changed"": true, ""exitcode"": ""0"", ""failed"": false, ""feature_result"": [], ""invocation"": { ""module_name"": ""win_feature"" }, ""msg"": ""Happy Happy Joy Joy"", ""restart_needed"": false, ""success"": true } ``` ##### ACTUAL RESULTS ``` Using SNIPPED/ansible/ansible.cfg as config file Loaded callback minimal of type stdout, v2.0 <127.0.0.1> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 55986 TO 127.0.0.1 <127.0.0.1> EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1475181976.06-278942010948362"").FullName | Write-Host -Separator ''; <127.0.0.1> PUT ""/var/folders/15/b3hfwryj5570qt_r8b9q2jpw0000gn/T/tmpAgxz8u"" TO ""C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1"" <127.0.0.1> EXEC Set-StrictMode -Version Latest Try { & 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1' } Catch { $_obj = @{ failed = $true } If ($_.Exception.GetType) { $_obj.Add('msg', $_.Exception.Message) } Else { $_obj.Add('msg', $_.ToString()) } If ($_.InvocationInfo.PositionMessage) { $_obj.Add('exception', $_.InvocationInfo.PositionMessage) } ElseIf ($_.ScriptStackTrace) { $_obj.Add('exception', $_.ScriptStackTrace) } Try { $_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json)) } Catch { } Echo $_obj | ConvertTo-Json -Compress -Depth 99 Exit 1 } Finally { Remove-Item ""C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362"" -Force -Recurse -ErrorAction SilentlyContinue } 127.0.0.1 | FAILED! 
=> { ""changed"": false, ""exitcode"": ""Failed"", ""failed"": true, ""feature_result"": [], ""invocation"": { ""module_name"": ""win_feature"" }, ""msg"": ""Failed to add feature"", ""restart_needed"": false, ""success"": false } ```",1,win feature always returns failed to add feature on windows server issue type bug report component name win feature ansible version ansible config file snipped ansible ansible cfg configured module search path configuration n a os environment host mac os x el capitan target windows server datacenter rtm build release summary win feature module is unable to add features to windows server targets including trivial use cases such as telnet client steps to reproduce ansible m win feature a name ad domain services examplehost in this case examplehost is a vm running on virtualbox expected results success changed true exitcode failed false feature result invocation module name win feature msg happy happy joy joy restart needed false success true actual results using snipped ansible ansible cfg as config file loaded callback minimal of type stdout establish winrm connection for user vagrant on port to exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put var folders t to c users vagrant appdata local temp ansible tmp win feature exec set strictmode version latest try c users vagrant appdata local temp ansible tmp win feature catch obj failed true if exception gettype obj add msg exception message else obj add msg tostring if invocationinfo positionmessage obj add exception invocationinfo positionmessage elseif scriptstacktrace obj add exception scriptstacktrace try obj add error record convertto json convertfrom json catch echo obj convertto json compress depth exit finally remove item c users vagrant appdata local temp ansible tmp force recurse erroraction silentlycontinue failed changed false exitcode failed failed true feature result invocation module name win feature msg failed to add feature restart needed false success false ,1 914,4600206779.0,IssuesEvent,2016-09-22 03:21:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_image module does not delete image,affects_2.1 bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_image ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT OSX 10.11.13 ##### SUMMARY When trying to use Ansible to delete an OpenStack image, an error saying ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" for auth values is specified. ##### STEPS TO REPRODUCE ``` --- - name: Delete the image. hosts: localhost tasks: - name: Delete the old OpenStack image version os_image: auth: auth_url: http://my_openstack_server:5000/v3 password: mypassword project_name: myproject username: admin name: CoreOS-certified-Docker state: absent ``` ##### EXPECTED RESULTS At least, a connection to OpenStack ##### ACTUAL RESULTS The following error is displayed: ``` fatal: [localhost]: FAILED! 
=> {""changed"": false, ""extra_data"": null, ""failed"": true, ""invocation"": {""module_args"": {""api_timeout"": null, ""auth"": {""auth_url"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""project_name"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER""}, ""auth_type"": null, ""availability_zone"": null, ""cacert"": null, ""cert"": null, ""cloud"": null, ""container_format"": ""bare"", ""disk_format"": ""qcow2"", ""endpoint_type"": ""public"", ""filename"": null, ""is_public"": false, ""kernel"": null, ""key"": null, ""min_disk"": 0, ""min_ram"": 0, ""name"": ""CoreOS-certified-Docker"", ""owner"": null, ""properties"": {}, ""ramdisk"": null, ""region_name"": null, ""state"": ""absent"", ""timeout"": 180, ""verify"": true, ""wait"": true}, ""module_name"": ""os_image""}, ""msg"": ""Error fetching image list: Could not determine a suitable URL for the plugin""} ```",True,"os_image module does not delete image - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_image ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT OSX 10.11.13 ##### SUMMARY When trying to use Ansible to delete an OpenStack image, an error saying ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" for auth values is specified. ##### STEPS TO REPRODUCE ``` --- - name: Delete the image. hosts: localhost tasks: - name: Delete the old OpenStack image version os_image: auth: auth_url: http://my_openstack_server:5000/v3 password: mypassword project_name: myproject username: admin name: CoreOS-certified-Docker state: absent ``` ##### EXPECTED RESULTS At least, a connection to OpenStack ##### ACTUAL RESULTS The following error is displayed: ``` fatal: [localhost]: FAILED! 
=> {""changed"": false, ""extra_data"": null, ""failed"": true, ""invocation"": {""module_args"": {""api_timeout"": null, ""auth"": {""auth_url"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""project_name"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER""}, ""auth_type"": null, ""availability_zone"": null, ""cacert"": null, ""cert"": null, ""cloud"": null, ""container_format"": ""bare"", ""disk_format"": ""qcow2"", ""endpoint_type"": ""public"", ""filename"": null, ""is_public"": false, ""kernel"": null, ""key"": null, ""min_disk"": 0, ""min_ram"": 0, ""name"": ""CoreOS-certified-Docker"", ""owner"": null, ""properties"": {}, ""ramdisk"": null, ""region_name"": null, ""state"": ""absent"", ""timeout"": 180, ""verify"": true, ""wait"": true}, ""module_name"": ""os_image""}, ""msg"": ""Error fetching image list: Could not determine a suitable URL for the plugin""} ```",1,os image module does not delete image issue type bug report component name os image ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific osx summary when trying to use ansible to delete an openstack image an error saying value specified in no log parameter for auth values is specified steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name delete the image hosts localhost tasks name delete the old openstack image version os image auth auth url password mypassword project name myproject username admin name coreos certified docker state absent expected results at least a connection to openstack actual results the following error is displayed fatal failed changed false extra data null failed true invocation module args api timeout null auth auth url value specified in no log parameter password value specified in no log parameter project name value specified in no log parameter username value specified in no log parameter auth type null availability zone null cacert null cert null cloud null container format bare disk format endpoint type public filename null is public false kernel null key null min disk min ram name coreos certified docker owner null properties ramdisk null region name null state absent timeout verify true wait true module name os image msg error fetching image list could not determine a suitable url for the plugin ,1 1122,4990293427.0,IssuesEvent,2016-12-08 14:42:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Git module has different behaviors when cloning fresh and updating git repository (unable to update properly),affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Git ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT Fedora 24 ##### SUMMARY Seeing different (unexpected) behavior when trying to update an existing repository. 
- If a repository is cloned ""fresh"" with a refspec and FETCH_HEAD (i.e, a Gerrit patchset review), it will work - If a repository is first cloned from master branch and then updated with the refspec and FETCH_HEAD, it will not work ##### STEPS TO REPRODUCE Full reproducer gist: https://gist.github.com/dmsimard/be06e4269ab094952db18da88c9ba70f tl;dr: ``` # Bash works git clone https://git.openstack.org/openstack/puppet-openstack-integration; cd puppet-openstack-integration git fetch https://git.openstack.org/openstack/puppet-openstack-integration refs/changes/60/337860/26 && git checkout FETCH_HEAD # This works ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" # This doesn't work ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration"" ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" ``` ##### EXPECTED RESULTS Expecting Ansible to be able to fetch a refspec for a repository that has already been cloned and checkout the FETCH_HEAD reference. ##### ACTUAL RESULTS ``` # This works (openstack)┬─[dmsimard@hostname:/tmp]─[03:28:51 PM] ╰─>$ rm -rf puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp]─[03:28:57 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450 `"" && echo ansible-tmp-1471290547.98-200796271054450=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450 `"" ) && sleep 0' PUT /tmp/tmpeHsdHm TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""a9b53b7acfc0146c931ea95cb2168d165f5edbbd"", ""before"": null, ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": ""refs/changes/60/337860/26"", ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""FETCH_HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:49:11 PM] ╰─>$ cd puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp/puppet-openstack-integration]─[03:49:26 PM] ╰─>$ git log --pretty=format:""%h%x09%an%x09%ad%x09%s"" -n2 a9b53b7 David Moreau-Simard Tue Jul 5 16:45:23 2016 -0400 Add designate test coverage to scenario003 f8aa97d Jenkins Mon Aug 8 11:34:27 2016 +0000 Merge 
""In-process token caching is deprecated, use memcached instead"" # This doesn't (openstack)┬─[dmsimard@hostname:/tmp]─[03:50:21 PM] ╰─>$ rm -rf puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp]─[03:50:58 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449 `"" && echo ansible-tmp-1471290721.44-129257258043449=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449 `"" ) && sleep 0' PUT /tmp/tmpn03XKS TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""da748ad437bc1f3165929b2b69208f7c58e62699"", ""before"": null, ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:52:03 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036 `"" && echo ansible-tmp-1471290728.81-173161814030036=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036 `"" ) && sleep 0' PUT /tmp/tmpq7cdDu TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""c2b8906f4779418a6192f4339d79a297b97f328d"", ""before"": ""da748ad437bc1f3165929b2b69208f7c58e62699"", ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": ""refs/changes/60/337860/26"", ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""FETCH_HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:52:11 PM] ╰─>$ cd 
puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp/puppet-openstack-integration]─[03:52:16 PM] ╰─>$ git log --pretty=format:""%h%x09%an%x09%ad%x09%s"" -n2 c2b8906 Jenkins Mon Nov 2 15:44:19 2015 +0000 Merge ""puppetfile: bump corosync to 0.8.0"" into stable/kilo 4341f5d Jenkins Mon Nov 2 15:44:01 2015 +0000 Merge ""puppetfile: Added corosync module"" into stable/kilo ``` ",True,"Git module has different behaviors when cloning fresh and updating git repository (unable to update properly) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Git ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT Fedora 24 ##### SUMMARY Seeing different (unexpected) behavior when trying to update an existing repository. - If a repository is cloned ""fresh"" with a refspec and FETCH_HEAD (i.e, a Gerrit patchset review), it will work - If a repository is first cloned from master branch and then updated with the refspec and FETCH_HEAD, it will not work ##### STEPS TO REPRODUCE Full reproducer gist: https://gist.github.com/dmsimard/be06e4269ab094952db18da88c9ba70f tl;dr: ``` # Bash works git clone https://git.openstack.org/openstack/puppet-openstack-integration; cd puppet-openstack-integration git fetch https://git.openstack.org/openstack/puppet-openstack-integration refs/changes/60/337860/26 && git checkout FETCH_HEAD # This works ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" # This doesn't work ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration"" ansible -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" ``` ##### EXPECTED RESULTS Expecting Ansible to be able to fetch a refspec for a repository that has already been cloned and checkout the FETCH_HEAD reference. 
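Until the module handles this case, a hedged workaround sketch is shown below; it is not part of the original report, and it assumes the repository has already been cloned to /tmp/puppet-openstack-integration by an earlier git task. It simply repeats the plain-git commands from the bash reproducer above through the command module, bypassing the git module's update path.
```
# Workaround sketch only; assumes the clone from the reproducer above already exists.
- name: fetch the Gerrit refspec directly
  command: git fetch https://git.openstack.org/openstack/puppet-openstack-integration refs/changes/60/337860/26
  args:
    chdir: /tmp/puppet-openstack-integration

- name: check out the fetched change
  command: git checkout FETCH_HEAD
  args:
    chdir: /tmp/puppet-openstack-integration
```
This only sidesteps the behaviour described here; the underlying difference between the fresh-clone and update paths still needs a fix in the module itself.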
##### ACTUAL RESULTS ``` # This works (openstack)┬─[dmsimard@hostname:/tmp]─[03:28:51 PM] ╰─>$ rm -rf puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp]─[03:28:57 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450 `"" && echo ansible-tmp-1471290547.98-200796271054450=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450 `"" ) && sleep 0' PUT /tmp/tmpeHsdHm TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290547.98-200796271054450/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""a9b53b7acfc0146c931ea95cb2168d165f5edbbd"", ""before"": null, ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": ""refs/changes/60/337860/26"", ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""FETCH_HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:49:11 PM] ╰─>$ cd puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp/puppet-openstack-integration]─[03:49:26 PM] ╰─>$ git log --pretty=format:""%h%x09%an%x09%ad%x09%s"" -n2 a9b53b7 David Moreau-Simard Tue Jul 5 16:45:23 2016 -0400 Add designate test coverage to scenario003 f8aa97d Jenkins Mon Aug 8 11:34:27 2016 +0000 Merge ""In-process token caching is deprecated, use memcached instead"" # This doesn't (openstack)┬─[dmsimard@hostname:/tmp]─[03:50:21 PM] ╰─>$ rm -rf puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp]─[03:50:58 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449 `"" && echo ansible-tmp-1471290721.44-129257258043449=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449 `"" ) && sleep 0' PUT /tmp/tmpn03XKS TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290721.44-129257258043449/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""da748ad437bc1f3165929b2b69208f7c58e62699"", ""before"": null, ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": 
""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:52:03 PM] ╰─>$ ansible -vvv -i hosts localhost -m git -a ""repo=https://git.openstack.org/openstack/puppet-openstack-integration dest=/tmp/puppet-openstack-integration refspec=refs/changes/60/337860/26 version=FETCH_HEAD"" No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: dmsimard EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036 `"" && echo ansible-tmp-1471290728.81-173161814030036=""` echo $HOME/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036 `"" ) && sleep 0' PUT /tmp/tmpq7cdDu TO /home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/git EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/git; rm -rf ""/home/dmsimard/.ansible/tmp/ansible-tmp-1471290728.81-173161814030036/"" > /dev/null 2>&1 && sleep 0' localhost | SUCCESS => { ""after"": ""c2b8906f4779418a6192f4339d79a297b97f328d"", ""before"": ""da748ad437bc1f3165929b2b69208f7c58e62699"", ""changed"": true, ""invocation"": { ""module_args"": { ""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp/puppet-openstack-integration"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": ""refs/changes/60/337860/26"", ""remote"": ""origin"", ""repo"": ""https://git.openstack.org/openstack/puppet-openstack-integration"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""FETCH_HEAD"" }, ""module_name"": ""git"" }, ""warnings"": [] } (openstack)┬─[dmsimard@hostname:/tmp]─[03:52:11 PM] ╰─>$ cd puppet-openstack-integration/ (openstack)┬─[dmsimard@hostname:/tmp/puppet-openstack-integration]─[03:52:16 PM] ╰─>$ git log --pretty=format:""%h%x09%an%x09%ad%x09%s"" -n2 c2b8906 Jenkins Mon Nov 2 15:44:19 2015 +0000 Merge ""puppetfile: bump corosync to 0.8.0"" into stable/kilo 4341f5d Jenkins Mon Nov 2 15:44:01 2015 +0000 Merge ""puppetfile: Added corosync module"" into stable/kilo ``` ",1,git module has different behaviors when cloning fresh and updating git repository unable to update properly issue type bug report component name git ansible version ansible config file configured module search path default w o overrides configuration n a os environment fedora summary seeing different unexpected behavior when trying to update an existing repository if a repository is cloned fresh with a refspec and fetch head i e a gerrit patchset review it will work if a repository is first cloned from master branch and then updated with the refspec and fetch head it will not work steps to reproduce full reproducer gist tl dr bash works git clone cd puppet openstack integration git fetch refs changes git checkout fetch head this works ansible i hosts localhost m git a repo dest tmp puppet openstack integration refspec refs changes version fetch head this doesn t work ansible i hosts localhost m git a repo dest tmp puppet openstack 
integration ansible i hosts localhost m git a repo dest tmp puppet openstack integration refspec refs changes version fetch head expected results expecting ansible to be able to fetch a refspec for a repository that has already been cloned and checkout the fetch head reference actual results this works openstack ┬─ ─ ╰─ rm rf puppet openstack integration openstack ┬─ ─ ╰─ ansible vvv i hosts localhost m git a repo dest tmp puppet openstack integration refspec refs changes version fetch head no config file found using defaults establish local connection for user dmsimard exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpehsdhm to home dmsimard ansible tmp ansible tmp git exec bin sh c lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home dmsimard ansible tmp ansible tmp git rm rf home dmsimard ansible tmp ansible tmp dev null sleep localhost success after before null changed true invocation module args accept hostkey false bare false clone true depth null dest tmp puppet openstack integration executable null force false key file null recursive true reference null refspec refs changes remote origin repo ssh opts null track submodules false update true verify commit false version fetch head module name git warnings openstack ┬─ ─ ╰─ cd puppet openstack integration openstack ┬─ ─ ╰─ git log pretty format h an ad s david moreau simard tue jul add designate test coverage to jenkins mon aug merge in process token caching is deprecated use memcached instead this doesn t openstack ┬─ ─ ╰─ rm rf puppet openstack integration openstack ┬─ ─ ╰─ ansible vvv i hosts localhost m git a repo dest tmp puppet openstack integration no config file found using defaults establish local connection for user dmsimard exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home dmsimard ansible tmp ansible tmp git exec bin sh c lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home dmsimard ansible tmp ansible tmp git rm rf home dmsimard ansible tmp ansible tmp dev null sleep localhost success after before null changed true invocation module args accept hostkey false bare false clone true depth null dest tmp puppet openstack integration executable null force false key file null recursive true reference null refspec null remote origin repo ssh opts null track submodules false update true verify commit false version head module name git warnings openstack ┬─ ─ ╰─ ansible vvv i hosts localhost m git a repo dest tmp puppet openstack integration refspec refs changes version fetch head no config file found using defaults establish local connection for user dmsimard exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home dmsimard ansible tmp ansible tmp git exec bin sh c lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home dmsimard ansible tmp ansible tmp git rm rf home dmsimard ansible tmp ansible tmp dev null sleep localhost success after before changed true invocation module args accept hostkey false bare false clone true depth null dest tmp puppet openstack integration executable null force false key file null recursive true reference null refspec refs changes remote origin repo ssh opts null track submodules false update true verify commit false version fetch head module name git warnings openstack ┬─ ─ ╰─ cd puppet openstack 
integration openstack ┬─ ─ ╰─ git log pretty format h an ad s jenkins mon nov merge puppetfile bump corosync to into stable kilo jenkins mon nov merge puppetfile added corosync module into stable kilo ,1 1835,6577363973.0,IssuesEvent,2017-09-12 00:23:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user module fails on SLES11 SP1-SP3,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME user ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Source: N/A, happens on RHEL, OSX Target: SLES11 SP1-SP3 ##### SUMMARY user module fails when user does not exist but group does. ##### STEPS TO REPRODUCE ``` - name: configure usergroup group: name: usergroup gid: 60003 state: present - name: configure user account user: name: user shell: /bin/bash skeleton: /etc/skel password: ""{{ password }}"" groups: usergroup append: true no_log: true ``` ##### EXPECTED RESULTS User is created and added to the indicated group ##### ACTUAL RESULTS ``` fatal: [targethost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""sysadm"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""user"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": ""/etc/skel"", ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on sonata"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": ""60004"", ""update_password"": ""always""}, ""module_name"": ""user""}, ""msg"": ""/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n"", ""name"": ""sysadm"", ""rc"": 2} ``` see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345 looks like ansible defaults to appending -N to the useradd command when the system is not redhat, but suse11 sp1-sp3 do not support the -N flag ",True,"user module fails on SLES11 SP1-SP3 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME user ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Source: N/A, happens on RHEL, OSX Target: SLES11 SP1-SP3 ##### SUMMARY user module fails when user does not exist but group does. ##### STEPS TO REPRODUCE ``` - name: configure usergroup group: name: usergroup gid: 60003 state: present - name: configure user account user: name: user shell: /bin/bash skeleton: /etc/skel password: ""{{ password }}"" groups: usergroup append: true no_log: true ``` ##### EXPECTED RESULTS User is created and added to the indicated group ##### ACTUAL RESULTS ``` fatal: [targethost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""sysadm"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""user"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": ""/etc/skel"", ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on sonata"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": ""60004"", ""update_password"": ""always""}, ""module_name"": ""user""}, ""msg"": ""/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n"", ""name"": ""sysadm"", ""rc"": 2} ``` see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345 looks like ansible defaults to appending -N to the useradd command when the system is not redhat, but suse11 sp1-sp3 do not support the -N flag ",1,user module fails on issue type bug report component name user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration deprecation warnings false os environment source n a happens on rhel osx target summary user module fails when user does not exist but group does steps to reproduce name configure usergroup group name usergroup gid state present name configure user account user name user shell bin bash skeleton etc skel password password groups usergroup append true no log true expected results user is created and added to the indicated group actual results fatal failed changed false failed true invocation module args append true comment null createhome true expires null force false generate ssh key null group null groups sysadm home null login class null move home false name user non unique false password value specified in no log parameter remove false shell bin bash skeleton etc skel ssh key bits ssh key comment ansible generated on sonata ssh key file null ssh key passphrase null ssh key type rsa state present system false uid update password always module name user msg usr sbin useradd invalid option n ntry useradd help or useradd usage for more information n name sysadm rc see looks like ansible defaults to appending n to the useradd command when the system is not redhat but do not support the n flag ,1 786,4389629573.0,IssuesEvent,2016-08-08 22:52:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Feature Idea: Amazon AWS IAM module, set password policy",aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: IAM policy plugin ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ##### Environment: Using: Ubuntu 15.10 Wily Werewolf ##### Summary: I would like to set the Amazon AWS IAM password policy, using Ansible. http://docs.aws.amazon.com/cli/latest/reference/iam/update-account-password-policy.html ##### Steps To Reproduce: ##### Expected Results: Amazon AWS IAM password policy will be updated. 
##### Actual Results: ",True,"Feature Idea: Amazon AWS IAM module, set password policy - ##### Issue Type: - Feature Idea ##### Plugin Name: IAM policy plugin ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ##### Environment: Using: Ubuntu 15.10 Wily Werewolf ##### Summary: I would like to set the Amazon AWS IAM password policy, using Ansible. http://docs.aws.amazon.com/cli/latest/reference/iam/update-account-password-policy.html ##### Steps To Reproduce: ##### Expected Results: Amazon AWS IAM password policy will be updated. ##### Actual Results: ",1,feature idea amazon aws iam module set password policy issue type feature idea plugin name iam policy plugin ansible version ansible ansible configuration environment using ubuntu wily werewolf summary i would like to set the amazon aws iam password policy using ansible steps to reproduce expected results amazon aws iam password policy will be updated actual results ,1 1821,6577329571.0,IssuesEvent,2017-09-12 00:09:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,rds_param_group unable to set long_query_time for MySQL,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME rds_param_group ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = ~/ansible.cfg configured module search path = Default w/o overrides boto (2.40.0) ``` ##### CONFIGURATION [defaults] ansible_managed = Ansible Managed display_skipped_hosts = False forks = 50 gathering = explicit host_key_checking = False nocows = 1 retry_files_enabled = False lookup_plugins = ./lookup_plugins ##### OS / ENVIRONMENT From: Mac OSX El Capitan To: N/A (AWS) ##### SUMMARY I am unable to set the long_query_time rds parameter group setting using the rds_param_grroup module. I have been able to successfully set many other variables without issue. The only thing I see different with this parameter is that it is a float data type. I have tried using the following values without success, ""5"", ""5.000000"", ""5.0"". The value reported for this variable from MySQL is ""5.000000"". From the error below, I believe this may be an issue with boto. I found someone reporting the same issue in the Google Group back in January 2015 and it said it was an issue with boto: https://groups.google.com/forum/#!msg/ansible-project/iN7cFi2aw98/w1oc2SEFKUcJ. No solution was posted. ##### STEPS TO REPRODUCE Create a RDS parameter group for MySQL and try to set the long_query_time parameter. ``` rds_parameter_groups: nw_mysql56_default: description: ""default rds parameter group for - mysql5.6"" engine: mysql5.6 immediate: yes name: nw-mysql56-default params: connect_timeout: 100 general_log: 0 innodb_flush_log_at_trx_commit: 2 log_output: FILE slow_query_log: 1 long_query_time: 5.000000 wait_timeout: 180000 state: present - name: create rds parameter groups rds_param_group: description: ""{{ item.value.description }}"" engine: ""{{ item.value.engine }}"" immediate: ""{{ item.value.immediate }}"" name: ""{{ item.value.name }}"" params: ""{{ item.value.params|to_json }}"" state: ""{{ item.value.state }}"" with_dict: ""{{ rds_parameter_groups }}"" ``` ##### EXPECTED RESULTS The long_query_time parameter should be set to the value specified. ##### ACTUAL RESULTS The module reports the error below: ``` An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: ValueError: value must be in 0-31536000 failed: [localhost] (item={'value': {u'engine': u'mysql5.6', u'description': u'default rds parameter group for - mysql5.6', u'immediate': True, u'state': u'present', u'params': {u'general_log': 0, u'slow_query_log': 1, u'connect_timeout': 100, u'wait_timeout': 180000, u'log_output': u'FILE', u'long_query_time': u'5.000000', u'innodb_flush_log_at_trx_commit': 2}, u'name': u'nw-mysql56-default'}, 'key': u'nw_mysql56_default'}) => {""failed"": true, ""item"": {""key"": ""nw_mysql56_default"", ""value"": {""description"": ""default rds parameter group for - mysql5.6"", ""engine"": ""mysql5.6"", ""immediate"": true, ""name"": ""nw-mysql56-default"", ""params"": {""connect_timeout"": 100, ""general_log"": 0, ""innodb_flush_log_at_trx_commit"": 2, ""log_output"": ""FILE"", ""long_query_time"": ""5.000000"", ""slow_query_log"": 1, ""wait_timeout"": 180000}, ""state"": ""present""}}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 2506, in \n main()\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 277, in main\n changed_params, group_params = modify_group(next_group, group_params, immediate)\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 198, in modify_group\n set_parameter(param, new_value, immediate)\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 167, in set_parameter\n param.value = converted_value\n File \""/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\"", line 169, in set_value\n self._set_string_value(value)\n File \""/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\"", line 141, in _set_string_value\n raise ValueError('value must be in %s' % self.allowed_values)\nValueError: value must be in 0-31536000\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"rds_param_group unable to set long_query_time for MySQL - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME rds_param_group ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = ~/ansible.cfg configured module search path = Default w/o overrides boto (2.40.0) ``` ##### CONFIGURATION [defaults] ansible_managed = Ansible Managed display_skipped_hosts = False forks = 50 gathering = explicit host_key_checking = False nocows = 1 retry_files_enabled = False lookup_plugins = ./lookup_plugins ##### OS / ENVIRONMENT From: Mac OSX El Capitan To: N/A (AWS) ##### SUMMARY I am unable to set the long_query_time rds parameter group setting using the rds_param_grroup module. I have been able to successfully set many other variables without issue. The only thing I see different with this parameter is that it is a float data type. I have tried using the following values without success, ""5"", ""5.000000"", ""5.0"". The value reported for this variable from MySQL is ""5.000000"". From the error below, I believe this may be an issue with boto. I found someone reporting the same issue in the Google Group back in January 2015 and it said it was an issue with boto: https://groups.google.com/forum/#!msg/ansible-project/iN7cFi2aw98/w1oc2SEFKUcJ. No solution was posted. ##### STEPS TO REPRODUCE Create a RDS parameter group for MySQL and try to set the long_query_time parameter. 
``` rds_parameter_groups: nw_mysql56_default: description: ""default rds parameter group for - mysql5.6"" engine: mysql5.6 immediate: yes name: nw-mysql56-default params: connect_timeout: 100 general_log: 0 innodb_flush_log_at_trx_commit: 2 log_output: FILE slow_query_log: 1 long_query_time: 5.000000 wait_timeout: 180000 state: present - name: create rds parameter groups rds_param_group: description: ""{{ item.value.description }}"" engine: ""{{ item.value.engine }}"" immediate: ""{{ item.value.immediate }}"" name: ""{{ item.value.name }}"" params: ""{{ item.value.params|to_json }}"" state: ""{{ item.value.state }}"" with_dict: ""{{ rds_parameter_groups }}"" ``` ##### EXPECTED RESULTS The long_query_time parameter should be set to the value specified. ##### ACTUAL RESULTS The module reports the error below: ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: value must be in 0-31536000 failed: [localhost] (item={'value': {u'engine': u'mysql5.6', u'description': u'default rds parameter group for - mysql5.6', u'immediate': True, u'state': u'present', u'params': {u'general_log': 0, u'slow_query_log': 1, u'connect_timeout': 100, u'wait_timeout': 180000, u'log_output': u'FILE', u'long_query_time': u'5.000000', u'innodb_flush_log_at_trx_commit': 2}, u'name': u'nw-mysql56-default'}, 'key': u'nw_mysql56_default'}) => {""failed"": true, ""item"": {""key"": ""nw_mysql56_default"", ""value"": {""description"": ""default rds parameter group for - mysql5.6"", ""engine"": ""mysql5.6"", ""immediate"": true, ""name"": ""nw-mysql56-default"", ""params"": {""connect_timeout"": 100, ""general_log"": 0, ""innodb_flush_log_at_trx_commit"": 2, ""log_output"": ""FILE"", ""long_query_time"": ""5.000000"", ""slow_query_log"": 1, ""wait_timeout"": 180000}, ""state"": ""present""}}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 2506, in \n main()\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 277, in main\n changed_params, group_params = modify_group(next_group, group_params, immediate)\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 198, in modify_group\n set_parameter(param, new_value, immediate)\n File \""/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\"", line 167, in set_parameter\n param.value = converted_value\n File \""/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\"", line 169, in set_value\n self._set_string_value(value)\n File \""/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\"", line 141, in _set_string_value\n raise ValueError('value must be in %s' % self.allowed_values)\nValueError: value must be in 0-31536000\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,rds param group unable to set long query time for mysql issue type bug report component name rds param group ansible version ansible config file ansible cfg configured module search path default w o overrides boto configuration ansible managed ansible managed display skipped hosts false forks gathering explicit host key checking false nocows retry files enabled false lookup plugins lookup plugins os environment from mac osx el capitan to n a aws summary i am unable to set the long query time rds parameter group setting using the rds param grroup module i have 
been able to successfully set many other variables without issue the only thing i see different with this parameter is that it is a float data type i have tried using the following values without success the value reported for this variable from mysql is from the error below i believe this may be an issue with boto i found someone reporting the same issue in the google group back in january and it said it was an issue with boto no solution was posted steps to reproduce create a rds parameter group for mysql and try to set the long query time parameter rds parameter groups nw default description default rds parameter group for engine immediate yes name nw default params connect timeout general log innodb flush log at trx commit log output file slow query log long query time wait timeout state present name create rds parameter groups rds param group description item value description engine item value engine immediate item value immediate name item value name params item value params to json state item value state with dict rds parameter groups expected results the long query time parameter should be set to the value specified actual results the module reports the error below an exception occurred during task execution to see the full traceback use vvv the error was valueerror value must be in failed item value u engine u u description u default rds parameter group for u immediate true u state u present u params u general log u slow query log u connect timeout u wait timeout u log output u file u long query time u u innodb flush log at trx commit u name u nw default key u nw default failed true item key nw default value description default rds parameter group for engine immediate true name nw default params connect timeout general log innodb flush log at trx commit log output file long query time slow query log wait timeout state present module stderr traceback most recent call last n file users jeremy ansible tmp ansible tmp rds param group line in n main n file users jeremy ansible tmp ansible tmp rds param group line in main n changed params group params modify group next group group params immediate n file users jeremy ansible tmp ansible tmp rds param group line in modify group n set parameter param new value immediate n file users jeremy ansible tmp ansible tmp rds param group line in set parameter n param value converted value n file usr local lib site packages boto rds parametergroup py line in set value n self set string value value n file usr local lib site packages boto rds parametergroup py line in set string value n raise valueerror value must be in s self allowed values nvalueerror value must be in n module stdout msg module failure parsed false ,1 947,4681890443.0,IssuesEvent,2016-10-09 00:55:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bug Report: Fetch fails if ansible_ssh_host is localhost,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME fetch ##### ANSIBLE VERSION 2.1.0.0 ##### CONFIGURATION These are the most relevant config items though I don't know that there is a correlation: ssh_args = -o ControlMaster=auto -o ControlPersist=60s ##### OS / ENVIRONMENT N/A ##### SUMMARY When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes. 
##### STEPS TO REPRODUCE Use the play below with ansible_ssh_host set to localhost - hosts: '{{ hosts }}' gather_facts: False tasks: - fetch: src: /tmp/remote_file dest: /tmp/local_file flat: true fail_on_missing: true ##### EXPECTED RESULTS I would expect the behavior to be the same for localhost as it is for every other host. ##### ACTUAL RESULTS Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not). ok: [localhost] => {""changed"": false, ""file"": ""/tmp/remote_file"", ""invocation"": {""module_args"": {""dest"": ""/tmp/local_file"", ""fail_on_missing"": true, ""flat"": true, ""src"": ""/tmp/remote_file""}, ""module_name"": ""fetch""}, ""msg"": ""unable to calculate the checksum of the remote file""} ",True,"Bug Report: Fetch fails if ansible_ssh_host is localhost - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME fetch ##### ANSIBLE VERSION 2.1.0.0 ##### CONFIGURATION These are the most relevant config items though I don't know that there is a correlation: ssh_args = -o ControlMaster=auto -o ControlPersist=60s ##### OS / ENVIRONMENT N/A ##### SUMMARY When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes. ##### STEPS TO REPRODUCE Use the play below with ansible_ssh_host set to localhost - hosts: '{{ hosts }}' gather_facts: False tasks: - fetch: src: /tmp/remote_file dest: /tmp/local_file flat: true fail_on_missing: true ##### EXPECTED RESULTS I would expect the behavior to be the same for localhost as it is for every other host. ##### ACTUAL RESULTS Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not). 
ok: [localhost] => {""changed"": false, ""file"": ""/tmp/remote_file"", ""invocation"": {""module_args"": {""dest"": ""/tmp/local_file"", ""fail_on_missing"": true, ""flat"": true, ""src"": ""/tmp/remote_file""}, ""module_name"": ""fetch""}, ""msg"": ""unable to calculate the checksum of the remote file""} ",1,bug report fetch fails if ansible ssh host is localhost issue type bug report component name fetch ansible version configuration these are the most relevant config items though i don t know that there is a correlation ssh args o controlmaster auto o controlpersist os environment n a summary when ansible ssh host is set to localhost fetch says it succeeds but it never gets the file it s important that localhost works the same as every other host value for testing purposes steps to reproduce use the play below with ansible ssh host set to localhost hosts hosts gather facts false tasks fetch src tmp remote file dest tmp local file flat true fail on missing true expected results i would expect the behavior to be the same for localhost as it is for every other host actual results fetch says it succeeds but verbose output actually shows this error doesn t matter if the file exists or not ok changed false file tmp remote file invocation module args dest tmp local file fail on missing true flat true src tmp remote file module name fetch msg unable to calculate the checksum of the remote file ,1 1909,6577567771.0,IssuesEvent,2017-09-12 01:48:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,elasticache module not idempotent: Fails if cluster already exists,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: elasticache ##### Ansible Version: ``` ansible 2.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Default ##### Environment: N/A ##### Summary: The elasticache module correctly creates an elasticache cluster, but running the same playbook again fails it if the cluster already exists. ##### Steps To Reproduce: ``` - name: Create Elasticache node elasticache: name: ""my-cluster"" state: present engine: memcached node_type: cache.t2.small num_nodes: 1 cache_port: 11112 wait: no hard_modify: no region: ""us-west-2"" cache_subnet_group: ""my-subnet-group"" security_group_ids: [""sg-1234567""] ``` ##### Expected Results: Running the above repeatedly should succeed with the first execution creating the cluster and subsequent executions seeing that it exists and moving on ##### Actual Results: Running the above twice fails with the error: ``` TASK [Create Elasticache node] ************************************************* fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Cache cluster already exists""} ``` ",True,"elasticache module not idempotent: Fails if cluster already exists - ##### Issue Type: - Bug Report ##### Plugin Name: elasticache ##### Ansible Version: ``` ansible 2.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Default ##### Environment: N/A ##### Summary: The elasticache module correctly creates an elasticache cluster, but running the same playbook again fails it if the cluster already exists. 
##### Steps To Reproduce: ``` - name: Create Elasticache node elasticache: name: ""my-cluster"" state: present engine: memcached node_type: cache.t2.small num_nodes: 1 cache_port: 11112 wait: no hard_modify: no region: ""us-west-2"" cache_subnet_group: ""my-subnet-group"" security_group_ids: [""sg-1234567""] ``` ##### Expected Results: Running the above repeatedly should succeed with the first execution creating the cluster and subsequent executions seeing that it exists and moving on ##### Actual Results: Running the above twice fails with the error: ``` TASK [Create Elasticache node] ************************************************* fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Cache cluster already exists""} ``` ",1,elasticache module not idempotent fails if cluster already exists issue type bug report plugin name elasticache ansible version ansible config file configured module search path default w o overrides ansible configuration default environment n a summary the elasticache module correctly creates an elasticache cluster but running the same playbook again fails it if the cluster already exists steps to reproduce for bugs please show exactly how to reproduce the problem for new features show how the feature would be used name create elasticache node elasticache name my cluster state present engine memcached node type cache small num nodes cache port wait no hard modify no region us west cache subnet group my subnet group security group ids expected results running the above repeatedly should succeed with the first execution creating the cluster and subsequent executions seeing that it exists and moving on actual results running the above twice fails with the error task fatal failed changed false failed true msg cache cluster already exists ,1 755,4351917853.0,IssuesEvent,2016-08-01 02:55:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"os_network should be able to support ""provider"" options ",cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Request ##### COMPONENT NAME os_network module ##### ANSIBLE VERSION N/A ##### SUMMARY Currently there is no way of creating a network as follows: ``` neutron net-create public --provider:network_type vlan \ --provider:segmentation_id 10 \ --provider:physical_network datacentre \ --router:external ``` or another example: ``` neutron net-create public --provider:network_type flat --provider:physical_network datacentre \ --router:external ```",True,"os_network should be able to support ""provider"" options - ##### ISSUE TYPE Feature Request ##### COMPONENT NAME os_network module ##### ANSIBLE VERSION N/A ##### SUMMARY Currently there is no way of creating a network as follows: ``` neutron net-create public --provider:network_type vlan \ --provider:segmentation_id 10 \ --provider:physical_network datacentre \ --router:external ``` or another example: ``` neutron net-create public --provider:network_type flat --provider:physical_network datacentre \ --router:external ```",1,os network should be able to support provider options issue type feature request component name os network module ansible version n a summary currently there is no way of creating a network as follows neutron net create public provider network type vlan provider segmentation id provider physical network datacentre router external or another example neutron net create public provider network type flat provider physical network datacentre router external ,1 
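Later releases of the os_network module expose the provider attributes asked for in the feature request above; a minimal sketch of the first neutron example, assuming a module version where the provider_* and external parameters are available:

```yaml
- os_network:
    name: public
    state: present
    external: true                        # --router:external
    provider_network_type: vlan           # --provider:network_type vlan
    provider_physical_network: datacentre # --provider:physical_network datacentre
    provider_segmentation_id: 10          # --provider:segmentation_id 10
```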
820,4442284669.0,IssuesEvent,2016-08-19 12:55:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Unarchive fails on MacOS: ""Unexpected error when accessing exploded file: [Errno 2]""",bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Unarchive ##### ANSIBLE VERSION Tested with three versions (devel is using up-to-date checkout of modules-core) ``` ansible 2.2.0 (devel 947877dcce) last updated 2016/08/11 11:34:15 (GMT -400) lib/ansible/modules/core: (devel 23ebb98570) last updated 2016/08/11 11:46:27 (GMT -400) lib/ansible/modules/extras: (detached HEAD 39153ea154) last updated 2016/08/10 23:59:44 (GMT -400) config file = configured module search path = Default w/o overrides ``` Also: ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` *Works correctly with:* ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION default config ##### OS / ENVIRONMENT Mac OS X 10.11.6 ##### SUMMARY This seems to be Mac specific. (tested on/against 10.11.6) Unarchive fails on Mac OS X when playbook is run locally or when targeting a remote Mac. Included playbook works correctly under Ansible v2.1.0.0. Errors are identical running remote or locally. The local playbook works correctly when run on Ubuntu 14.04.4 LTS from the most recent Git checkout. The command also succeeds when targeting an Ubuntu box. Various `copy` and `remote_src` options had no effect. ##### STEPS TO REPRODUCE Run the following playbook on a Mac (all my available test machines are running 10.11.6) ``` --- - hosts: localhost become: no connection: local tasks: - name: download archive get_url: url: https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip # url: https://wordpress.org/latest.zip dest: /tmp/ force: yes register: archive - name: Unpack downloaded archive unarchive: src: '{{ archive.dest }}' dest: /tmp/ # copy: no remote_src: no list_files: yes ``` ##### EXPECTED RESULTS Zip archive is uncompressed ##### ACTUAL RESULTS Module failure. Output is from latest checkout. 
``` [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAYBOOK: ansible-local-test.yaml ********************************************** 1 plays in /Users/joe/Desktop/ansible-local-test.yaml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `"" && echo ansible-tmp-1470935578.63-176735032320360=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmp3C9ALR TO /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/ /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [download archive] ******************************************************** task path: /Users/joe/Desktop/ansible-local-test.yaml:9 Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/network/basics/get_url.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `"" && echo ansible-tmp-1470935579.12-83816721022550=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmprUh8qy TO /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/ /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { ""changed"": true, ""checksum_dest"": null, ""checksum_src"": ""c06bd7b6da48f8e1bafb5f09d3058f39ee77b5d8"", ""dest"": ""/tmp/basic-wordpress-vagrant-master.zip"", ""gid"": 0, ""group"": ""wheel"", ""invocation"": { ""module_args"": { ""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""path"": ""/tmp/basic-wordpress-vagrant-master.zip"", ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""unsafe_writes"": null, ""url"": ""https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, 
""validate_certs"": true }, ""module_name"": ""get_url"" }, ""md5sum"": ""76e57a9d81852954b3cb1ae66c2649c7"", ""mode"": ""0644"", ""msg"": ""OK (14631 bytes)"", ""owner"": ""joe"", ""size"": 14631, ""src"": ""/var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpNVd4BN"", ""state"": ""file"", ""uid"": 502, ""url"": ""https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"" } TASK [Unpack downloaded archive] *********************************************** task path: /Users/joe/Desktop/ansible-local-test.yaml:23 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `"" && echo ansible-tmp-1470935580.87-202761726912153=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `"" ) && sleep 0' Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/stat.py <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `"" && echo ansible-tmp-1470935580.99-37733200780272=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpnwez_h TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/"" > /dev/null 2>&1 && sleep 0' <127.0.0.1> PUT /private/tmp/basic-wordpress-vagrant-master.zip TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source && sleep 0' Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/unarchive.py <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `"" && echo ansible-tmp-1470935581.39-252312604650389=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpGmuhUh TO /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/ /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/"" > /dev/null 2>&1 && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""dest"": ""/tmp/"", ""failed"": true, ""gid"": 0, ""group"": ""wheel"", ""handler"": ""TgzArchive"", ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""copy"": true, ""creates"": null, ""delimiter"": null, ""dest"": ""/tmp/"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": null, ""keep_newer"": false, ""list_files"": true, ""mode"": null, ""original_basename"": ""basic-wordpress-vagrant-master.zip"", ""owner"": null, ""regexp"": null, ""remote_src"": false, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source"", ""unsafe_writes"": null, ""validate_certs"": true } }, ""mode"": ""01777"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/basic-wordpress-vagrant-master/'"", ""owner"": ""root"", ""size"": 1428, ""src"": ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source"", ""state"": ""directory"", ""uid"": 0 } NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/joe/Desktop/ansible-local-test.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 ``` ",True,"Unarchive fails on MacOS: ""Unexpected error when accessing exploded file: [Errno 2]"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Unarchive ##### ANSIBLE VERSION Tested with three versions (devel is using up-to-date checkout of modules-core) ``` ansible 2.2.0 (devel 947877dcce) last updated 2016/08/11 11:34:15 (GMT -400) lib/ansible/modules/core: (devel 23ebb98570) last updated 2016/08/11 11:46:27 (GMT -400) lib/ansible/modules/extras: (detached HEAD 39153ea154) last updated 2016/08/10 23:59:44 (GMT -400) config file = configured module search path = Default w/o overrides ``` Also: ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` *Works correctly with:* ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION default config ##### OS / ENVIRONMENT Mac OS X 10.11.6 ##### SUMMARY This seems to be Mac specific. (tested on/against 10.11.6) Unarchive fails on Mac OS X when playbook is run locally or when targeting a remote Mac. Included playbook works correctly under Ansible v2.1.0.0. Errors are identical running remote or locally. The local playbook works correctly when run on Ubuntu 14.04.4 LTS from the most recent Git checkout. The command also succeeds when targeting an Ubuntu box. Various `copy` and `remote_src` options had no effect. ##### STEPS TO REPRODUCE Run the following playbook on a Mac (all my available test machines are running 10.11.6) ``` --- - hosts: localhost become: no connection: local tasks: - name: download archive get_url: url: https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip # url: https://wordpress.org/latest.zip dest: /tmp/ force: yes register: archive - name: Unpack downloaded archive unarchive: src: '{{ archive.dest }}' dest: /tmp/ # copy: no remote_src: no list_files: yes ``` ##### EXPECTED RESULTS Zip archive is uncompressed ##### ACTUAL RESULTS Module failure. Output is from latest checkout. 
``` [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAYBOOK: ansible-local-test.yaml ********************************************** 1 plays in /Users/joe/Desktop/ansible-local-test.yaml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `"" && echo ansible-tmp-1470935578.63-176735032320360=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmp3C9ALR TO /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/ /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [download archive] ******************************************************** task path: /Users/joe/Desktop/ansible-local-test.yaml:9 Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/network/basics/get_url.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `"" && echo ansible-tmp-1470935579.12-83816721022550=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmprUh8qy TO /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/ /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { ""changed"": true, ""checksum_dest"": null, ""checksum_src"": ""c06bd7b6da48f8e1bafb5f09d3058f39ee77b5d8"", ""dest"": ""/tmp/basic-wordpress-vagrant-master.zip"", ""gid"": 0, ""group"": ""wheel"", ""invocation"": { ""module_args"": { ""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""path"": ""/tmp/basic-wordpress-vagrant-master.zip"", ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""unsafe_writes"": null, ""url"": ""https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, 
""validate_certs"": true }, ""module_name"": ""get_url"" }, ""md5sum"": ""76e57a9d81852954b3cb1ae66c2649c7"", ""mode"": ""0644"", ""msg"": ""OK (14631 bytes)"", ""owner"": ""joe"", ""size"": 14631, ""src"": ""/var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpNVd4BN"", ""state"": ""file"", ""uid"": 502, ""url"": ""https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"" } TASK [Unpack downloaded archive] *********************************************** task path: /Users/joe/Desktop/ansible-local-test.yaml:23 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `"" && echo ansible-tmp-1470935580.87-202761726912153=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `"" ) && sleep 0' Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/stat.py <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `"" && echo ansible-tmp-1470935580.99-37733200780272=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpnwez_h TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/"" > /dev/null 2>&1 && sleep 0' <127.0.0.1> PUT /private/tmp/basic-wordpress-vagrant-master.zip TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source && sleep 0' Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/unarchive.py <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `"" && echo ansible-tmp-1470935581.39-252312604650389=""` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpGmuhUh TO /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/ /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py; rm -rf ""/Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/"" > /dev/null 2>&1 && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""dest"": ""/tmp/"", ""failed"": true, ""gid"": 0, ""group"": ""wheel"", ""handler"": ""TgzArchive"", ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""copy"": true, ""creates"": null, ""delimiter"": null, ""dest"": ""/tmp/"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": null, ""keep_newer"": false, ""list_files"": true, ""mode"": null, ""original_basename"": ""basic-wordpress-vagrant-master.zip"", ""owner"": null, ""regexp"": null, ""remote_src"": false, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source"", ""unsafe_writes"": null, ""validate_certs"": true } }, ""mode"": ""01777"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/basic-wordpress-vagrant-master/'"", ""owner"": ""root"", ""size"": 1428, ""src"": ""/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source"", ""state"": ""directory"", ""uid"": 0 } NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/joe/Desktop/ansible-local-test.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 ``` ",1,unarchive fails on macos unexpected error when accessing exploded file issue type bug report component name unarchive ansible version tested with three versions devel is using up to date checkout of modules core ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides also ansible config file configured module search path default w o overrides works correctly with ansible config file configured module search path default w o overrides configuration default config os environment mac os x summary this seems to be mac specific tested on against unarchive fails on mac os x when playbook is run locally or when targeting a remote mac included playbook works correctly under ansible errors are identical running remote or locally the local playbook works correctly when run on ubuntu lts from the most recent git checkout the command also succeeds when targeting an ubuntu box various copy and remote src options had no effect steps to reproduce run the following playbook on a mac all my available test machines are running hosts localhost become no connection local tasks name download archive get url url url dest tmp force yes register archive name unpack downloaded archive unarchive src archive dest dest tmp copy no remote src no list files yes expected results zip archive is uncompressed actual results module failure output is from latest checkout host file not found etc ansible hosts provided hosts list is empty only localhost is available playbook ansible local test yaml plays in users joe desktop ansible local test yaml play task using module file users joe ansible dev lib ansible modules core system setup py establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users joe ansible tmp ansible tmp setup py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp setup py sleep exec bin sh c usr local opt python bin users joe 
ansible tmp ansible tmp setup py rm rf users joe ansible tmp ansible tmp dev null sleep ok task task path users joe desktop ansible local test yaml using module file users joe ansible dev lib ansible modules core network basics get url py establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users joe ansible tmp ansible tmp get url py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp get url py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp get url py rm rf users joe ansible tmp ansible tmp dev null sleep changed changed true checksum dest null checksum src dest tmp basic wordpress vagrant master zip gid group wheel invocation module args backup false checksum content null delimiter null dest tmp directory mode null follow false force true force basic auth false group null headers null http agent ansible httpget mode null owner null path tmp basic wordpress vagrant master zip regexp null remote src null selevel null serole null setype null seuser null src null timeout tmp dest unsafe writes null url url password null url username null use proxy true validate certs true module name get url mode msg ok bytes owner joe size src var folders t state file uid url task task path users joe desktop ansible local test yaml establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep using module file users joe ansible dev lib ansible modules core files stat py exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpnwez h to users joe ansible tmp ansible tmp stat py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp stat py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp stat py rm rf users joe ansible tmp ansible tmp dev null sleep put private tmp basic wordpress vagrant master zip to users joe ansible tmp ansible tmp source exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp source sleep using module file users joe ansible dev lib ansible modules core files unarchive py exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpgmuhuh to users joe ansible tmp ansible tmp unarchive py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp unarchive py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp unarchive py rm rf users joe ansible tmp ansible tmp dev null sleep exec bin sh c rm f r users joe ansible tmp ansible tmp dev null sleep fatal failed changed false dest tmp failed true gid group wheel handler tgzarchive invocation module args backup null content null copy true creates null delimiter null dest tmp directory mode null exclude extra opts follow false force null group null keep newer false list files true mode null original basename basic wordpress vagrant master zip owner null regexp null remote src false selevel null serole null setype null seuser null src users joe ansible tmp ansible tmp source unsafe writes null validate certs true mode msg unexpected error when accessing exploded file no such file or directory tmp basic wordpress vagrant master owner 
root size src users joe ansible tmp ansible tmp source state directory uid no more hosts left to retry use limit users joe desktop ansible local test retry play recap localhost ok changed unreachable failed ,1 1857,6577407463.0,IssuesEvent,2017-09-12 00:42:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"os_router: All interfaces get detached, then re-attached on router update",affects_2.0 bug_report cloud openstack waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY On a router update, all the of the interfaces are detached, then the new set are attached. This causes issues with network stability and requires running expensive api calls for each port. It also causes issues with environments running the l3 ha keepalived vrrp driver. Ports would be detached then attached so fast that the keepalived driver couldn't keep up. This would cause the l3 agent to hang, rendering all l3 services to be unavailable. ##### STEPS TO REPRODUCE 1. Create a router with internal interfaces using the os_router.py module. 2. Update the internal interfaces list by adding and/or deleting interfaces, or making any other change to it's configurations 3. Re-run the playbook. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L327-L334 ##### EXPECTED RESULTS All of the internal router interfaces will be detached from the router, then the new set will be attached. ",True,"os_router: All interfaces get detached, then re-attached on router update - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY On a router update, all the of the interfaces are detached, then the new set are attached. This causes issues with network stability and requires running expensive api calls for each port. It also causes issues with environments running the l3 ha keepalived vrrp driver. Ports would be detached then attached so fast that the keepalived driver couldn't keep up. This would cause the l3 agent to hang, rendering all l3 services to be unavailable. ##### STEPS TO REPRODUCE 1. Create a router with internal interfaces using the os_router.py module. 2. Update the internal interfaces list by adding and/or deleting interfaces, or making any other change to it's configurations 3. Re-run the playbook. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L327-L334 ##### EXPECTED RESULTS All of the internal router interfaces will be detached from the router, then the new set will be attached. 
",1,os router all interfaces get detached then re attached on router update issue type bug report component name os router py ansible version ansible os environment na summary on a router update all the of the interfaces are detached then the new set are attached this causes issues with network stability and requires running expensive api calls for each port it also causes issues with environments running the ha keepalived vrrp driver ports would be detached then attached so fast that the keepalived driver couldn t keep up this would cause the agent to hang rendering all services to be unavailable steps to reproduce create a router with internal interfaces using the os router py module update the internal interfaces list by adding and or deleting interfaces or making any other change to it s configurations re run the playbook see expected results all of the internal router interfaces will be detached from the router then the new set will be attached ,1 799,4417051180.0,IssuesEvent,2016-08-15 01:43:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,replace module only replaces last match,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3c65c03a67) last updated 2016/08/14 16:26:39 (GMT -500) lib/ansible/modules/core: (detached HEAD decb2ec9fa) last updated 2016/08/14 16:27:00 (GMT -500) lib/ansible/modules/extras: (detached HEAD 61d5fe148c) last updated 2016/08/14 16:27:13 (GMT -500) config file = /home/nipsy/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] hostfile = ~/.ansible/hosts host_key_checking = False [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=8h -o Compression=yes control_path = /dev/shm/%%r@%%h:%%p scp_if_ssh=True ``` ##### OS / ENVIRONMENT Debian/testing (packaged ansible (currently 2.1.1.0) behaves the same as most recent dev branch) ##### SUMMARY The replace module is only changing the last line matched by the supplied regex, instead of all matches. 
##### STEPS TO REPRODUCE Create *test.list* ``` deb http://us.archive.ubuntu.com/ubuntu lucid main restricted deb-src http://us.archive.ubuntu.com/ubuntu lucid main restricted ``` Create *test.yaml* ``` --- - hosts: all tasks: # fix outdated Ubuntu repos - name: fix outdated Ubuntu repos replace: dest=/home/nipsy/tmp/test.list regexp='^([^#]+)us\.archive\.ubuntu\.com(.*)$' replace='\1old-releases.ubuntu.com\2' backup=yes ``` Run: ``` $ ansible-playbook -vi localhost, test.yaml Using /home/nipsy/.ansible.cfg as config file PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [localhost] TASK [fix outdated Ubuntu repos] *********************************************** changed: [localhost] => {""changed"": true, ""msg"": ""1 replacements made""} PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS ``` $ cat test.list deb http://old-releases.ubuntu.com/ubuntu lucid main restricted deb-src http://old-releases.ubuntu.com/ubuntu lucid main restricted ``` ##### ACTUAL RESULTS ``` $ cat test.list deb http://us.archive.ubuntu.com/ubuntu lucid main restricted deb-src http://old-releases.ubuntu.com/ubuntu lucid main restricted ```",True,"replace module only replaces last match - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3c65c03a67) last updated 2016/08/14 16:26:39 (GMT -500) lib/ansible/modules/core: (detached HEAD decb2ec9fa) last updated 2016/08/14 16:27:00 (GMT -500) lib/ansible/modules/extras: (detached HEAD 61d5fe148c) last updated 2016/08/14 16:27:13 (GMT -500) config file = /home/nipsy/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] hostfile = ~/.ansible/hosts host_key_checking = False [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=8h -o Compression=yes control_path = /dev/shm/%%r@%%h:%%p scp_if_ssh=True ``` ##### OS / ENVIRONMENT Debian/testing (packaged ansible (currently 2.1.1.0) behaves the same as most recent dev branch) ##### SUMMARY The replace module is only changing the last line matched by the supplied regex, instead of all matches. 
##### STEPS TO REPRODUCE Create *test.list* ``` deb http://us.archive.ubuntu.com/ubuntu lucid main restricted deb-src http://us.archive.ubuntu.com/ubuntu lucid main restricted ``` Create *test.yaml* ``` --- - hosts: all tasks: # fix outdated Ubuntu repos - name: fix outdated Ubuntu repos replace: dest=/home/nipsy/tmp/test.list regexp='^([^#]+)us\.archive\.ubuntu\.com(.*)$' replace='\1old-releases.ubuntu.com\2' backup=yes ``` Run: ``` $ ansible-playbook -vi localhost, test.yaml Using /home/nipsy/.ansible.cfg as config file PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [localhost] TASK [fix outdated Ubuntu repos] *********************************************** changed: [localhost] => {""changed"": true, ""msg"": ""1 replacements made""} PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS ``` $ cat test.list deb http://old-releases.ubuntu.com/ubuntu lucid main restricted deb-src http://old-releases.ubuntu.com/ubuntu lucid main restricted ``` ##### ACTUAL RESULTS ``` $ cat test.list deb http://us.archive.ubuntu.com/ubuntu lucid main restricted deb-src http://old-releases.ubuntu.com/ubuntu lucid main restricted ```",1,replace module only replaces last match issue type bug report component name replace ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home nipsy ansible cfg configured module search path default w o overrides configuration hostfile ansible hosts host key checking false ssh args o controlmaster auto o controlpersist o compression yes control path dev shm r h p scp if ssh true os environment debian testing packaged ansible currently behaves the same as most recent dev branch summary the replace module is only changing the last line matched by the supplied regex instead of all matches steps to reproduce create test list deb lucid main restricted deb src lucid main restricted create test yaml hosts all tasks fix outdated ubuntu repos name fix outdated ubuntu repos replace dest home nipsy tmp test list regexp us archive ubuntu com replace releases ubuntu com backup yes run ansible playbook vi localhost test yaml using home nipsy ansible cfg as config file play task ok task changed changed true msg replacements made play recap localhost ok changed unreachable failed expected results cat test list deb lucid main restricted deb src lucid main restricted actual results cat test list deb lucid main restricted deb src lucid main restricted ,1 1837,6577368886.0,IssuesEvent,2017-09-12 00:25:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Docker module: argument memory_limit is of type and we were unable to convert to int"" on Ansible 2.0.2.0-1.el7",affects_2.0 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### OS / ENVIRONMENT Centos 7 ##### SUMMARY After upgrade to ansible 2.0.2.0 it's not possible to enter memory_limit as human readable string (ie. 265MB) only bytes are accepted. 
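Until the replace regression described above is resolved, a blunt workaround (an illustrative sketch, not a module fix) is to rewrite every matching line with sed:

```yaml
- name: point lucid repos at old-releases on every matching line
  command: sed -i 's|us\.archive\.ubuntu\.com|old-releases.ubuntu.com|g' /home/nipsy/tmp/test.list
```

Unlike the original regexp, this ignores the leading `[^#]+` guard and would also touch commented-out lines, so it is only a stopgap.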
##### STEPS TO REPRODUCE try set memory_limit: 256MB in docker container task ``` - name: sphinx container docker: name: sphinx image: michalzubkowicz/docker-sphinxsearch state: started restart_policy: always memory_limit: 256MB ``` ##### EXPECTED RESULTS Should accept string as in earlier versions ##### ACTUAL RESULTS Is showing error ``` argument memory_limit is of type and we were unable to convert to int ``` ",True,"Docker module: argument memory_limit is of type and we were unable to convert to int"" on Ansible 2.0.2.0-1.el7 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### OS / ENVIRONMENT Centos 7 ##### SUMMARY After upgrade to ansible 2.0.2.0 it's not possible to enter memory_limit as human readable string (ie. 265MB) only bytes are accepted. ##### STEPS TO REPRODUCE try set memory_limit: 256MB in docker container task ``` - name: sphinx container docker: name: sphinx image: michalzubkowicz/docker-sphinxsearch state: started restart_policy: always memory_limit: 256MB ``` ##### EXPECTED RESULTS Should accept string as in earlier versions ##### ACTUAL RESULTS Is showing error ``` argument memory_limit is of type and we were unable to convert to int ``` ",1,docker module argument memory limit is of type and we were unable to convert to int on ansible issue type bug report component name docker ansible version ansible os environment centos summary after upgrade to ansible it s not possible to enter memory limit as human readable string ie only bytes are accepted steps to reproduce try set memory limit in docker container task name sphinx container docker name sphinx image michalzubkowicz docker sphinxsearch state started restart policy always memory limit expected results should accept string as in earlier versions actual results is showing error argument memory limit is of type and we were unable to convert to int ,1 1214,5194607873.0,IssuesEvent,2017-01-23 04:58:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Git ability to clean untracked and ignored files,affects_2.3 feature_idea waiting_on_maintainer,"Git module should have an option to run `git clean -f` to remove untracked files. This is useful to say build a project from a pristine repository. Currently all untracked files remain in the directory. I think there should be two options : - `clean_untracked` - remove files and directories - `clean_ignored` - remove ignored flies If this sounds good, I can send a PR ",True,"Git ability to clean untracked and ignored files - Git module should have an option to run `git clean -f` to remove untracked files. This is useful to say build a project from a pristine repository. Currently all untracked files remain in the directory. 
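Until the git module grows options like the ones proposed below, a common interim approach is to run `git clean` directly after the checkout; the tasks below are only a sketch (the repository path is a placeholder for the `dest` used by the git task), with `-d` covering untracked directories and `-x` additionally removing ignored files, as documented by git.

```yaml
# Sketch: clean the working tree after a git checkout.
# /path/to/repo is a placeholder for the dest used by the git task.
- name: remove untracked files and directories
  command: git clean -f -d
  args:
    chdir: /path/to/repo

- name: additionally remove ignored files, when a pristine tree is wanted
  command: git clean -f -d -x
  args:
    chdir: /path/to/repo
```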
I think there should be two options : - `clean_untracked` - remove files and directories - `clean_ignored` - remove ignored flies If this sounds good, I can send a PR ",1,git ability to clean untracked and ignored files git module should have an option to run git clean f to remove untracked files this is useful to say build a project from a pristine repository currently all untracked files remain in the directory i think there should be two options clean untracked remove files and directories clean ignored remove ignored flies if this sounds good i can send a pr ,1 1154,5037475876.0,IssuesEvent,2016-12-17 17:51:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,package module: Add update_cache option,affects_2.1 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pacakge ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Fedora ##### SUMMARY The package module should have the ability to update the package system cache ",True,"package module: Add update_cache option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pacakge ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Fedora ##### SUMMARY The package module should have the ability to update the package system cache ",1,package module add update cache option issue type feature idea component name pacakge ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment fedora summary the package module should have the ability to update the package system cache ,1 1302,5542078195.0,IssuesEvent,2017-03-22 14:20:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Volumes do not get dropped when removing the docker container,affects_2.0 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ``` [defaults] host_key_checking = False forks = 10 var_compression_level=9 retries=5 fact_caching = jsonfile fact_caching_connection = ~/.ansible/tmp ``` ##### OS / ENVIRONMENT OS From: OS X, Ubuntu 14.04 OS To: Ubuntu 14.04 ##### SUMMARY Volumes do not get dropped when removing the docker container ##### STEPS TO REPRODUCE 1. Create the docker container with attached volumes 2. fill the volume 3. drop the container 4. repeat several times 5. `docker volume ls -f dangling=true` will show you the not-used containers See [Docker manual](https://docs.docker.com/v1.10/engine/userguide/containers/dockervolumes/): > Note: Docker will not warn you when removing a container without providing the `-v` option to delete its volumes. If you remove containers without using the `-v` option, you may end up with “dangling” volumes; volumes that are no longer referenced by a container. You can use `docker volume ls -f dangling=true` to find dangling volumes, and use `docker volume rm ` to remove a volume that’s no longer needed. 
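For the dangling-volume problem quoted above, a stop-gap that is often used until the module can pass the equivalent of `docker rm -v` is a periodic cleanup task; the snippet below is a sketch built on the same `docker volume ls -f dangling=true` command the Docker manual mentions, and should be reviewed before use on shared hosts.

```yaml
# Sketch: remove any dangling volumes left behind after containers are replaced.
# Does nothing when there are no dangling volumes.
- name: remove dangling docker volumes
  shell: >
    docker volume ls -qf dangling=true | xargs --no-run-if-empty docker volume rm
  register: volume_cleanup
  changed_when: volume_cleanup.stdout != ""
```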
I spin up the container with something like this: ``` - name: Spinning up an {{ persistcomponent }}-platform container docker: registry: ""{{ docker_registry_url }}"" email: ""{{ docker_registry_email }}"" username: ""{{ docker_registry_user }}"" password: ""{{ docker_registry_password }}"" docker_api_version: ""{{ docker_api_version }}"" image: ""{{ docker_registry_url }}/arena/{{ docker_images[persistcomponent] }}:{{ builds['platform'][persistcomponent] }}"" pull: always state: reloaded restart_policy: always restart_policy_retry: 5 name: ""{{ persistcomponent }}-platform"" net: bridge volumes: - /srv/gsn/arenasettings.json:/srv/gsn/arenasettings.json:ro log_driver: fluentd log_opt: fluentd-address: 127.0.0.1:24224 fluentd-tag: ""docker.{{ '{{' }}.Name{{ '}}' }}"" fluentd-async-connect: ""true"" #log_driver: syslog #log_opt: # syslog-address: udp://127.0.0.1:15140 # syslog-tag: ""docker.{{ '{{' }}.Name{{ '}}' }}"" env: ARENA_COMPONENT: ""{{ persistcomponent }}"" tags: - ""{{ persistcomponent }}_platform_spinup"" - platform_spinup - ""{{ persistcomponent }}_platform"" when: builds['platform'][persistcomponent] not in [ 'Keep', 'None'] notify: - ""drop {{ persistcomponent }}-nginx container"" - ""restart {{ persistcomponent }}-nginx container"" ``` The container has `VOLUME` in its Dockerfile which we mount to nginx container (we put static files there, collected with Django's `manage.py collectstatic` ##### EXPECTED RESULTS not to have those dangling volumes ##### ACTUAL RESULTS we have dangling volumes when we renew the container ",True,"Volumes do not get dropped when removing the docker container - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ``` [defaults] host_key_checking = False forks = 10 var_compression_level=9 retries=5 fact_caching = jsonfile fact_caching_connection = ~/.ansible/tmp ``` ##### OS / ENVIRONMENT OS From: OS X, Ubuntu 14.04 OS To: Ubuntu 14.04 ##### SUMMARY Volumes do not get dropped when removing the docker container ##### STEPS TO REPRODUCE 1. Create the docker container with attached volumes 2. fill the volume 3. drop the container 4. repeat several times 5. `docker volume ls -f dangling=true` will show you the not-used containers See [Docker manual](https://docs.docker.com/v1.10/engine/userguide/containers/dockervolumes/): > Note: Docker will not warn you when removing a container without providing the `-v` option to delete its volumes. If you remove containers without using the `-v` option, you may end up with “dangling” volumes; volumes that are no longer referenced by a container. You can use `docker volume ls -f dangling=true` to find dangling volumes, and use `docker volume rm ` to remove a volume that’s no longer needed. 
I spin up the container with something like this: ``` - name: Spinning up an {{ persistcomponent }}-platform container docker: registry: ""{{ docker_registry_url }}"" email: ""{{ docker_registry_email }}"" username: ""{{ docker_registry_user }}"" password: ""{{ docker_registry_password }}"" docker_api_version: ""{{ docker_api_version }}"" image: ""{{ docker_registry_url }}/arena/{{ docker_images[persistcomponent] }}:{{ builds['platform'][persistcomponent] }}"" pull: always state: reloaded restart_policy: always restart_policy_retry: 5 name: ""{{ persistcomponent }}-platform"" net: bridge volumes: - /srv/gsn/arenasettings.json:/srv/gsn/arenasettings.json:ro log_driver: fluentd log_opt: fluentd-address: 127.0.0.1:24224 fluentd-tag: ""docker.{{ '{{' }}.Name{{ '}}' }}"" fluentd-async-connect: ""true"" #log_driver: syslog #log_opt: # syslog-address: udp://127.0.0.1:15140 # syslog-tag: ""docker.{{ '{{' }}.Name{{ '}}' }}"" env: ARENA_COMPONENT: ""{{ persistcomponent }}"" tags: - ""{{ persistcomponent }}_platform_spinup"" - platform_spinup - ""{{ persistcomponent }}_platform"" when: builds['platform'][persistcomponent] not in [ 'Keep', 'None'] notify: - ""drop {{ persistcomponent }}-nginx container"" - ""restart {{ persistcomponent }}-nginx container"" ``` The container has `VOLUME` in its Dockerfile which we mount to nginx container (we put static files there, collected with Django's `manage.py collectstatic` ##### EXPECTED RESULTS not to have those dangling volumes ##### ACTUAL RESULTS we have dangling volumes when we renew the container ",1,volumes do not get dropped when removing the docker container issue type bug report component name docker module ansible version ansible configuration host key checking false forks var compression level retries fact caching jsonfile fact caching connection ansible tmp os environment os from os x ubuntu os to ubuntu summary volumes do not get dropped when removing the docker container steps to reproduce create the docker container with attached volumes fill the volume drop the container repeat several times docker volume ls f dangling true will show you the not used containers see note docker will not warn you when removing a container without providing the v option to delete its volumes if you remove containers without using the v option you may end up with “dangling” volumes volumes that are no longer referenced by a container you can use docker volume ls f dangling true to find dangling volumes and use docker volume rm to remove a volume that’s no longer needed i spin up the container with something like this name spinning up an persistcomponent platform container docker registry docker registry url email docker registry email username docker registry user password docker registry password docker api version docker api version image docker registry url arena docker images builds pull always state reloaded restart policy always restart policy retry name persistcomponent platform net bridge volumes srv gsn arenasettings json srv gsn arenasettings json ro log driver fluentd log opt fluentd address fluentd tag docker name fluentd async connect true log driver syslog log opt syslog address udp syslog tag docker name env arena component persistcomponent tags persistcomponent platform spinup platform spinup persistcomponent platform when builds not in notify drop persistcomponent nginx container restart persistcomponent nginx container the container has volume in its dockerfile which we mount to nginx container we put static files there collected with django s 
manage py collectstatic expected results not to have those dangling volumes actual results we have dangling volumes when we renew the container ,1 1822,6577329897.0,IssuesEvent,2017-09-12 00:09:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,IAM user can not go from N to 0 groups.,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/iam ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/sbrady/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat ~/.ansible.cfg [defaults] nocows=1 [ssh_connection] pipelining = True ``` ##### OS / ENVIRONMENT Linux but issue should not be platform specific. ##### SUMMARY When trying to change group membership of a user from one or more groups, to no groups, no groups are changed. ##### STEPS TO REPRODUCE ``` iam: iam_type: ""user"" name: ""joe"" groups: [] ``` ##### EXPECTED RESULTS Expected ""joe"" to no longer be in the ""foo"" group. ##### ACTUAL RESULTS ""joe"" remained in the ""foo"" group. Examining the code, I see the issue. https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683 The module uses `if groups:`, where groups is a list. Any empty list (""I want this user to be in no groups"") will evaluate to `False`, and therefore the block will not execute. I believe the author meant to check if the parameter had been passed at all. Please advise if I am mis-using the module, or can provide more information. Thanks. ",True,"IAM user can not go from N to 0 groups. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/iam ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/sbrady/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat ~/.ansible.cfg [defaults] nocows=1 [ssh_connection] pipelining = True ``` ##### OS / ENVIRONMENT Linux but issue should not be platform specific. ##### SUMMARY When trying to change group membership of a user from one or more groups, to no groups, no groups are changed. ##### STEPS TO REPRODUCE ``` iam: iam_type: ""user"" name: ""joe"" groups: [] ``` ##### EXPECTED RESULTS Expected ""joe"" to no longer be in the ""foo"" group. ##### ACTUAL RESULTS ""joe"" remained in the ""foo"" group. Examining the code, I see the issue. https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683 The module uses `if groups:`, where groups is a list. Any empty list (""I want this user to be in no groups"") will evaluate to `False`, and therefore the block will not execute. I believe the author meant to check if the parameter had been passed at all. Please advise if I am mis-using the module, or can provide more information. Thanks. 
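The analysis in the report above looks right: `if groups:` is false for an empty list, so the module never reaches its group-update code when `groups: []` is passed. Until that is fixed (the reporter suggests testing whether the parameter was supplied at all), one interim approach is to drop the memberships explicitly with the AWS CLI; the sketch below assumes the CLI is installed and reuses the user and group names from the report.

```yaml
# Sketch of an interim workaround: remove group memberships directly with the
# AWS CLI, since iam with groups: [] currently leaves them untouched.
- name: remove joe from his remaining groups
  command: aws iam remove-user-from-group --user-name joe --group-name {{ item }}
  with_items:
    - foo
```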
",1,iam user can not go from n to groups issue type bug report component name cloud amazon iam ansible version ansible config file home sbrady ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables cat ansible cfg nocows pipelining true os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux but issue should not be platform specific summary when trying to change group membership of a user from one or more groups to no groups no groups are changed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used iam iam type user name joe groups expected results expected joe to no longer be in the foo group actual results joe remained in the foo group examining the code i see the issue the module uses if groups where groups is a list any empty list i want this user to be in no groups will evaluate to false and therefore the block will not execute i believe the author meant to check if the parameter had been passed at all please advise if i am mis using the module or can provide more information thanks ,1 1140,4998881477.0,IssuesEvent,2016-12-09 21:20:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,AWS modules should account for API throttling,affects_2.3 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc (Applies to _any_ AWS module!) ##### ANSIBLE VERSION Any Ansible version ##### OS / ENVIRONMENT N/A ##### SUMMARY It seems that Ansible (or ""Boto"" at the bottom layer) doesn't account for Query API Request Rate throttling Amazon enforces. If you do frequent AWS API calls (like I do at the moment, as I frequently create and destroy a very complex environment consisting of many components, because this environment is still in development) this throttling can kick in, give a negative reply to your Ansible task, and the Playbook aborts at that point. Countermeasures: - [API retries](https://docs.aws.amazon.com/general/latest/gr/api-retries.html) - [Backoff background info](https://www.awsarchitectureblog.com/2015/03/backoff.html) - [Retry throttling](https://aws.amazon.com/blogs/developer/introducing-retry-throttling/) ##### STEPS TO REPRODUCE Run a Playbook that does many API calls, like create a VPC, many subnets inside the VPC, Security Groups, ELBs, Internet Gateways, NAT Gateways, Route Tables, EC2 instances, etc. Then destroy them with your ""destroy"" Playbook. Re-run ""create"" Playbook. Destroy. Then you're likely to see this rate limiting. ##### EXPECTED RESULTS AWS modules should not fail when the rate limiting is in effect, but should retry until the call succeeds. ##### ACTUAL RESULTS AWS modules fail when the rate limiting is in effect. This manifests as follows: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""An error occurred (RequestLimitExceeded) when calling the DescribeAddresses operation: Request limit exceeded."", ""success"": false} ``` ",True,"AWS modules should account for API throttling - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc (Applies to _any_ AWS module!) 
##### ANSIBLE VERSION Any Ansible version ##### OS / ENVIRONMENT N/A ##### SUMMARY It seems that Ansible (or ""Boto"" at the bottom layer) doesn't account for Query API Request Rate throttling Amazon enforces. If you do frequent AWS API calls (like I do at the moment, as I frequently create and destroy a very complex environment consisting of many components, because this environment is still in development) this throttling can kick in, give a negative reply to your Ansible task, and the Playbook aborts at that point. Countermeasures: - [API retries](https://docs.aws.amazon.com/general/latest/gr/api-retries.html) - [Backoff background info](https://www.awsarchitectureblog.com/2015/03/backoff.html) - [Retry throttling](https://aws.amazon.com/blogs/developer/introducing-retry-throttling/) ##### STEPS TO REPRODUCE Run a Playbook that does many API calls, like create a VPC, many subnets inside the VPC, Security Groups, ELBs, Internet Gateways, NAT Gateways, Route Tables, EC2 instances, etc. Then destroy them with your ""destroy"" Playbook. Re-run ""create"" Playbook. Destroy. Then you're likely to see this rate limiting. ##### EXPECTED RESULTS AWS modules should not fail when the rate limiting is in effect, but should retry until the call succeeds. ##### ACTUAL RESULTS AWS modules fail when the rate limiting is in effect. This manifests as follows: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""An error occurred (RequestLimitExceeded) when calling the DescribeAddresses operation: Request limit exceeded."", ""success"": false} ``` ",1,aws modules should account for api throttling issue type bug report component name vpc applies to any aws module ansible version any ansible version os environment n a summary it seems that ansible or boto at the bottom layer doesn t account for query api request rate throttling amazon enforces if you do frequent aws api calls like i do at the moment as i frequently create and destroy a very complex environment consisting of many components because this environment is still in development this throttling can kick in give a negative reply to your ansible task and the playbook aborts at that point countermeasures steps to reproduce run a playbook that does many api calls like create a vpc many subnets inside the vpc security groups elbs internet gateways nat gateways route tables instances etc then destroy them with your destroy playbook re run create playbook destroy then you re likely to see this rate limiting expected results aws modules should not fail when the rate limiting is in effect but should retry until the call succeeds actual results aws modules fail when the rate limiting is in effect this manifests as follows fatal failed changed false failed true msg an error occurred requestlimitexceeded when calling the describeaddresses operation request limit exceeded success false ,1 1786,6575879768.0,IssuesEvent,2017-09-11 17:41:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Adding Namespace as an editable parameter for the docker_login module.,affects_2.1 cloud docker feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ansible-modules-core/cloud/docker/docker_login.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I am trying to log into my organization using the docker_login module. 
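Going back to the RequestLimitExceeded report above: until the AWS modules grow built-in backoff, a pattern that is sometimes used at the playbook level is to retry the task with a delay. Whether a hard module failure is retried by `until` has varied between Ansible versions, so treat this as a partial mitigation only; the security-group task below is purely illustrative. Raising `num_retries` in the `[Boto]` section of the boto configuration is another knob that may help for the boto-based modules.

```yaml
# Illustrative only: retry an AWS call a few times with a delay so that
# transient RequestLimitExceeded responses have a chance to clear.
- name: create security group, retrying if the API throttles us
  ec2_group:
    name: example-sg
    description: example security group
    region: us-east-1
  register: sg_result
  retries: 5
  delay: 10
  until: sg_result | success
```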
Docker Hub and Cloud now both have Organizations which improve your ability to control who can create, edit or delete Docker Hub repositories. ##### STEPS TO REPRODUCE ``` - name: Log into DockerHub docker_login: username: docker password: rekcod email: docker@docker.io namespace: docker_organization ``` ##### EXPECTED RESULTS I would expect that Docker now logs into my personal account but uses the organization that I am linked to and not my own personal account for pushing images or doing any tasks Docker related on the machine I used the docker_login module. This way I can effectively work with multiple teams that have their own repositories and I can effectively deploy Docker Images that are private from different teams. ##### ACTUAL RESULTS This isn't currently possible. ",True,"Adding Namespace as an editable parameter for the docker_login module. - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ansible-modules-core/cloud/docker/docker_login.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I am trying to log into my organization using the docker_login module. Docker Hub and Cloud now both have Organizations which improve your ability to control who can create, edit or delete Docker Hub repositories. ##### STEPS TO REPRODUCE ``` - name: Log into DockerHub docker_login: username: docker password: rekcod email: docker@docker.io namespace: docker_organization ``` ##### EXPECTED RESULTS I would expect that Docker now logs into my personal account but uses the organization that I am linked to and not my own personal account for pushing images or doing any tasks Docker related on the machine I used the docker_login module. This way I can effectively work with multiple teams that have their own repositories and I can effectively deploy Docker Images that are private from different teams. ##### ACTUAL RESULTS This isn't currently possible. 
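For the organization request above, a point worth noting as a workaround rather than a substitute for the feature: Docker Hub does not use a separate login namespace, and organization membership is attached to the personal account, so after a normal `docker_login` it is usually enough to address the image by its organization path. The sketch below reuses the credentials from the report; `docker_organization/myimage` is a hypothetical image name.

```yaml
# Sketch: log in with the personal account, then address the image by its
# organization namespace; docker_organization/myimage is a hypothetical name.
- name: Log into DockerHub
  docker_login:
    username: docker
    password: rekcod
    email: docker@docker.io

- name: push an image that lives under the organization
  command: docker push docker_organization/myimage:latest
```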
",1,adding namespace as an editable parameter for the docker login module issue type feature idea component name ansible modules core cloud docker docker login py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration currently i m not using the docker login module i am exporting the docker login details via terminal dockercloud user username dockercloud pass password dockercloud namespace organization os environment ubuntu lts summary i am trying to log into my organization using the docker login module docker hub and cloud now both have organizations which improve your ability to control who can create edit or delete docker hub repositories steps to reproduce i would just specify dockercloud namespace in the config file that docker would have by default name log into dockerhub docker login username docker password rekcod email docker docker io namespace docker organization expected results i would expect that docker now logs into my personal account but uses the organization that i am linked to and not my own personal account for pushing images or doing any tasks docker related on the machine i used the docker login module this way i can effectively work with multiple teams that have their own repositories and i can effectively deploy docker images that are private from different teams actual results this isn t currently possible ,1 842,4489167060.0,IssuesEvent,2016-08-30 09:59:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"User module - existing user groups removed, even with append set as yes (OS X)",bug_report waiting_on_maintainer,"##### Issue Type: Bug Report ##### Ansible Version: ``` ansible 1.9.3 configured module search path = None ``` ##### Ansible Configuration: ``` cat: /etc/ansible/ansible.cfg: No such file or directory cat: ~/.ansible.cfg: No such file or directory ``` ##### Environment: OS X 10.11 Beta (15A243d) ##### Summary: I attempted to add a new group **groupexample** to the user that was running the playbook, and added `append: yes` per the [docs](http://docs.ansible.com/ansible/user_module.html), as I wanted to ensure that the existing groups were left intact. The task completed successfully, and the new group **groupexample** was added the the user successfully, but the user was now missing other (very important!) system groups, such as **admin**. 
##### Steps To Reproduce: ``` $ cat playbook.yml --- - hosts: all connection: local tasks: - name: ensure groupexample group exists become: yes group: name: groupexample state: present - name: add groupexample group to current user become: yes user: name: ""{{ ansible_user_id }}"" groups: groupexample append: yes ``` ``` $ ansible-playbook --ask-become-pass playbook.yml -i inventory/hosts SUDO password: PLAY [all] ******************************************************************** GATHERING FACTS *************************************************************** ok: [localhost] TASK: [ensure groupexample group exists] ************************************** changed: [localhost] TASK: [add groupexample group to current user] ******************************** changed: [localhost] PLAY RECAP ******************************************************************** localhost : ok=3 changed=2 unreachable=0 failed=0 ``` ##### Expected Results: ``` $ groups staff admin wheel everyone localaccounts groupexample ``` (OS X underscore and com groups removed for brevity) ##### Actual Results: ``` $ groups groupexample everyone localaccounts ``` (OS X underscore and com groups removed for brevity) ##### Other notes For any other poor souls that may also be running a single user machine who then loose the groups that allow them to administer the OS: - Boot into single user mode (power off, hold command+s, power on) - Run `visudo` - Add your username to list of sudoers, for example ``` root ALL=(ALL) ALL %admin ALL=(ALL) ALL yourusernamehere ALL=(ALL) ALL ``` - Save, quit, `reboot` - You can now boot into OS X normally, and re add your lost groups with: ``` sudo dseditgroup -o edit -a yourusernamehere -t user admin sudo dseditgroup -o edit -a yourusernamehere -t user wheel sudo dseditgroup -o edit -a yourusernamehere -t user staff ```",True,"User module - existing user groups removed, even with append set as yes (OS X) - ##### Issue Type: Bug Report ##### Ansible Version: ``` ansible 1.9.3 configured module search path = None ``` ##### Ansible Configuration: ``` cat: /etc/ansible/ansible.cfg: No such file or directory cat: ~/.ansible.cfg: No such file or directory ``` ##### Environment: OS X 10.11 Beta (15A243d) ##### Summary: I attempted to add a new group **groupexample** to the user that was running the playbook, and added `append: yes` per the [docs](http://docs.ansible.com/ansible/user_module.html), as I wanted to ensure that the existing groups were left intact. The task completed successfully, and the new group **groupexample** was added the the user successfully, but the user was now missing other (very important!) system groups, such as **admin**. 
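Given the behaviour described above, a defensive pattern (only a sketch, and no substitute for a proper fix) is to capture the user's current supplementary groups first and pass them back explicitly together with the new group, so nothing is lost even if `append` is ignored on OS X.

```yaml
# Sketch: re-state the existing groups explicitly so the new group is added
# without relying on append. id -Gn prints the current supplementary groups.
- name: capture current groups
  command: id -Gn {{ ansible_user_id }}
  register: current_groups
  changed_when: false

- name: add groupexample while keeping the existing groups
  become: yes
  user:
    name: "{{ ansible_user_id }}"
    groups: "{{ (current_groups.stdout.split() + ['groupexample']) | unique | join(',') }}"
```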
##### Steps To Reproduce: ``` $ cat playbook.yml --- - hosts: all connection: local tasks: - name: ensure groupexample group exists become: yes group: name: groupexample state: present - name: add groupexample group to current user become: yes user: name: ""{{ ansible_user_id }}"" groups: groupexample append: yes ``` ``` $ ansible-playbook --ask-become-pass playbook.yml -i inventory/hosts SUDO password: PLAY [all] ******************************************************************** GATHERING FACTS *************************************************************** ok: [localhost] TASK: [ensure groupexample group exists] ************************************** changed: [localhost] TASK: [add groupexample group to current user] ******************************** changed: [localhost] PLAY RECAP ******************************************************************** localhost : ok=3 changed=2 unreachable=0 failed=0 ``` ##### Expected Results: ``` $ groups staff admin wheel everyone localaccounts groupexample ``` (OS X underscore and com groups removed for brevity) ##### Actual Results: ``` $ groups groupexample everyone localaccounts ``` (OS X underscore and com groups removed for brevity) ##### Other notes For any other poor souls that may also be running a single user machine who then loose the groups that allow them to administer the OS: - Boot into single user mode (power off, hold command+s, power on) - Run `visudo` - Add your username to list of sudoers, for example ``` root ALL=(ALL) ALL %admin ALL=(ALL) ALL yourusernamehere ALL=(ALL) ALL ``` - Save, quit, `reboot` - You can now boot into OS X normally, and re add your lost groups with: ``` sudo dseditgroup -o edit -a yourusernamehere -t user admin sudo dseditgroup -o edit -a yourusernamehere -t user wheel sudo dseditgroup -o edit -a yourusernamehere -t user staff ```",1,user module existing user groups removed even with append set as yes os x issue type bug report ansible version ansible configured module search path none ansible configuration cat etc ansible ansible cfg no such file or directory cat ansible cfg no such file or directory environment os x beta summary i attempted to add a new group groupexample to the user that was running the playbook and added append yes per the as i wanted to ensure that the existing groups were left intact the task completed successfully and the new group groupexample was added the the user successfully but the user was now missing other very important system groups such as admin steps to reproduce cat playbook yml hosts all connection local tasks name ensure groupexample group exists become yes group name groupexample state present name add groupexample group to current user become yes user name ansible user id groups groupexample append yes ansible playbook ask become pass playbook yml i inventory hosts sudo password play gathering facts ok task changed task changed play recap localhost ok changed unreachable failed expected results groups staff admin wheel everyone localaccounts groupexample os x underscore and com groups removed for brevity actual results groups groupexample everyone localaccounts os x underscore and com groups removed for brevity other notes for any other poor souls that may also be running a single user machine who then loose the groups that allow them to administer the os boot into single user mode power off hold command s power on run visudo add your username to list of sudoers for example root all all all admin all all all yourusernamehere all all all save quit reboot you can now 
boot into os x normally and re add your lost groups with sudo dseditgroup o edit a yourusernamehere t user admin sudo dseditgroup o edit a yourusernamehere t user wheel sudo dseditgroup o edit a yourusernamehere t user staff ,1 1180,5096339470.0,IssuesEvent,2017-01-03 17:53:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mount module feature request: allow to mount without adding to fstab,affects_2.3 feature_idea waiting_on_maintainer,"Hello, For temporary need, it should be possible to mount a filesystem without adding it to fstab http://docs.ansible.com/ansible/mount_module.html The state values should be updated present absent mounted (just mount) mountedandpresent (mount+fstab) unmounted (unmount) unmountedandpresent (unmount but keep in fstab; ex: with opts noauto - only manual usage) My main use case is using ansible for mass forensics collections and try to minimize as much as possible modifications to the system Thanks ",True,"mount module feature request: allow to mount without adding to fstab - Hello, For temporary need, it should be possible to mount a filesystem without adding it to fstab http://docs.ansible.com/ansible/mount_module.html The state values should be updated present absent mounted (just mount) mountedandpresent (mount+fstab) unmounted (unmount) unmountedandpresent (unmount but keep in fstab; ex: with opts noauto - only manual usage) My main use case is using ansible for mass forensics collections and try to minimize as much as possible modifications to the system Thanks ",1,mount module feature request allow to mount without adding to fstab hello for temporary need it should be possible to mount a filesystem without adding it to fstab the state values should be updated present absent mounted just mount mountedandpresent mount fstab unmounted unmount unmountedandpresent unmount but keep in fstab ex with opts noauto only manual usage my main use case is using ansible for mass forensics collections and try to minimize as much as possible modifications to the system thanks ,1 1656,6574034331.0,IssuesEvent,2017-09-11 11:11:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"lineinfile insertafter=EOF, Replace last line instead of insert after last line",affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report lineinfile insertafter=EOF http://docs.ansible.com/ansible/lineinfile_module.html ansible 2.1.0.0 Ansible Configuration: default CentOS Linux release 7.2.1511 (Core) 3.10.0-327.36.1.el7.x86_64 ##### SUMMARY BUG: The lastline is deleted and replaced, with lineinfile insertafter=EOF Want to add a new string to the end of the file, but the result is that the lastline is deleted and replaced. Replace last line instead of insert after last line. 
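A likely cause for the lineinfile behaviour just described, visible in the reproduction below: the task passes `regexp=''`, and an empty pattern matches every line, so lineinfile replaces the last matching line (the last line of the file) instead of appending. Leaving the regexp out entirely usually gives the intended append-at-EOF behaviour; a sketch:

```yaml
# Sketch: omit regexp so the line is appended if it is not already present.
- name: add rhn_chk to the end of /etc/crontab
  lineinfile:
    dest: /etc/crontab
    line: '*/60 * * * * root /sbin/rhn_chk'
    insertafter: EOF
```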
##### STEPS TO REPRODUCE /etc/crontab */45 * * * * root /sbin/blabla */45 * * * * root /sbin/tratra Playbook section: - name: add at the end of the file /etc/crontab rhn_check lineinfile: dest=/etc/crontab regexp='' insertafter=EOF line='*/60 * * * * root /sbin/rhn_chk' ##### EXPECTED RESULTS Line added to the crontab */45 * * * * root /sbin/blabla */45 * * * * root /sbin/tratra */60 * * * * root /sbin/rhn_chk ##### ACTUAL RESULTS /etc/crontab */45 * * * * root /sbin/blabla */60 * * * * root /sbin/rhn_chk Last line in the file deleted and replaced ",True,"lineinfile insertafter=EOF, Replace last line instead of insert after last line - ##### ISSUE TYPE - Bug Report lineinfile insertafter=EOF http://docs.ansible.com/ansible/lineinfile_module.html ansible 2.1.0.0 Ansible Configuration: default CentOS Linux release 7.2.1511 (Core) 3.10.0-327.36.1.el7.x86_64 ##### SUMMARY BUG: The lastline is deleted and replaced, with lineinfile insertafter=EOF Want to add a new string to the end of the file, but the result is that the lastline is deleted and replaced. Replace last line instead of insert after last line. ##### STEPS TO REPRODUCE /etc/crontab */45 * * * * root /sbin/blabla */45 * * * * root /sbin/tratra Playbook section: - name: add at the end of the file /etc/crontab rhn_check lineinfile: dest=/etc/crontab regexp='' insertafter=EOF line='*/60 * * * * root /sbin/rhn_chk' ##### EXPECTED RESULTS Line added to the crontab */45 * * * * root /sbin/blabla */45 * * * * root /sbin/tratra */60 * * * * root /sbin/rhn_chk ##### ACTUAL RESULTS /etc/crontab */45 * * * * root /sbin/blabla */60 * * * * root /sbin/rhn_chk Last line in the file deleted and replaced ",1,lineinfile insertafter eof replace last line instead of insert after last line issue type bug report lineinfile insertafter eof ansible ansible configuration default centos linux release core summary bug the lastline is deleted and replaced with lineinfile insertafter eof want to add a new string to the end of the file but the result is that the lastline is deleted and replaced replace last line instead of insert after last line steps to reproduce etc crontab root sbin blabla root sbin tratra playbook section name add at the end of the file etc crontab rhn check lineinfile dest etc crontab regexp insertafter eof line root sbin rhn chk expected results line added to the crontab root sbin blabla root sbin tratra root sbin rhn chk actual results etc crontab root sbin blabla root sbin rhn chk last line in the file deleted and replaced ,1 1322,5658289757.0,IssuesEvent,2017-04-10 09:40:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami not handling wait:no and tags correctly,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I have three problems with the current execution: 1. wait: no still waits. It still waits till the AMI is available or a time_out. 2. As shown below, the tags are present but not displayed as output. 3. When the wait time expires, the AMI is created however the tags are ignored, suggesting they are only added after the wait has expired. 
I would expect this to happen before on wait: no ##### STEPS TO REPRODUCE Playbook: ``` - name: create AMI backup ec2_ami: region: ""{{ ec2_region }}"" instance_id: ""{{ ec2_id }}"" wait: no no_reboot: yes name: ""{{ ec2_tag_Name }}"" tags: creation: ""{{ ansible_date_time.epoch }}"" expiration: ""{{ expiration_date.stdout }}"" register: output ``` ##### EXPECTED RESULTS I expected the following results: 1. The playbook moves on and does not wait for results 2. Tags are shown in the output 3. Tags show in the AWS console after a time_out has been reached. ##### ACTUAL RESULTS Output on a normal run (after waiting for the task to complete): ``` TASK [create AMI backup] ******************************************************* changed: [*.*.*.*] TASK [debug] ******************************************************************* ok: [*.*.*.*] => { ""output"": { ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-*******"", ""volume_type"": ""gp2"" } }, ""changed"": true, ""creationDate"": ""2016-06-29T09:31:56.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""image_id"": ""ami-******"", ""is_public"": false, ""location"": ""*******/*****-2016-06-monthly"", ""msg"": ""AMI creation operation complete"", ""ownerId"": ""*******"", ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""hvm"" } } ``` In the situation where a time_out occurs: ``` 09:12:36 TASK [create AMI backup] ******************************************************* 09:28:18 fatal: [*.*.*.* -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help.""} ``` Also notice how this is by no means the 300 seconds ( 5 minutes ) specified as the default. I think this module needs a good once over to verify the combination of delegation, tags and wait functions properly. Thanks! ",True,"ec2_ami not handling wait:no and tags correctly - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I have three problems with the current execution: 1. wait: no still waits. It still waits till the AMI is available or a time_out. 2. As shown below, the tags are present but not displayed as output. 3. When the wait time expires, the AMI is created however the tags are ignored, suggesting they are only added after the wait has expired. I would expect this to happen before on wait: no ##### STEPS TO REPRODUCE Playbook: ``` - name: create AMI backup ec2_ami: region: ""{{ ec2_region }}"" instance_id: ""{{ ec2_id }}"" wait: no no_reboot: yes name: ""{{ ec2_tag_Name }}"" tags: creation: ""{{ ansible_date_time.epoch }}"" expiration: ""{{ expiration_date.stdout }}"" register: output ``` ##### EXPECTED RESULTS I expected the following results: 1. The playbook moves on and does not wait for results 2. Tags are shown in the output 3. Tags show in the AWS console after a time_out has been reached. 
##### ACTUAL RESULTS Output on a normal run (after waiting for the task to complete): ``` TASK [create AMI backup] ******************************************************* changed: [*.*.*.*] TASK [debug] ******************************************************************* ok: [*.*.*.*] => { ""output"": { ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-*******"", ""volume_type"": ""gp2"" } }, ""changed"": true, ""creationDate"": ""2016-06-29T09:31:56.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""image_id"": ""ami-******"", ""is_public"": false, ""location"": ""*******/*****-2016-06-monthly"", ""msg"": ""AMI creation operation complete"", ""ownerId"": ""*******"", ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""hvm"" } } ``` In the situation where a time_out occurs: ``` 09:12:36 TASK [create AMI backup] ******************************************************* 09:28:18 fatal: [*.*.*.* -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help.""} ``` Also notice how this is by no means the 300 seconds ( 5 minutes ) specified as the default. I think this module needs a good once over to verify the combination of delegation, tags and wait functions properly. Thanks! ",1, ami not handling wait no and tags correctly issue type bug report component name ami py ansible version ansible configuration n a os environment n a summary i have three problems with the current execution wait no still waits it still waits till the ami is available or a time out as shown below the tags are present but not displayed as output when the wait time expires the ami is created however the tags are ignored suggesting they are only added after the wait has expired i would expect this to happen before on wait no steps to reproduce playbook name create ami backup ami region region instance id id wait no no reboot yes name tag name tags creation ansible date time epoch expiration expiration date stdout register output expected results i expected the following results the playbook moves on and does not wait for results tags are shown in the output tags show in the aws console after a time out has been reached actual results output on a normal run after waiting for the task to complete task changed task ok output architecture block device mapping dev delete on termination true encrypted false size snapshot id snap volume type changed true creationdate description null hypervisor xen image id ami is public false location monthly msg ami creation operation complete ownerid root device name dev root device type ebs state available tags virtualization type hvm in the situation where a time out occurs task fatal failed changed false failed true msg error while trying to find the new image using wait yes and or a longer wait timeout may help also notice how this is by no means the seconds minutes specified as the default i think this module needs a good once over to verify the combination of delegation tags and wait functions properly thanks ,1 878,4541245148.0,IssuesEvent,2016-09-09 17:09:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_virualmashine issue,affects_2.1 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report - 
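Returning to the ec2_ami report above: since the tags appear to be applied only after the wait completes, a workaround that sidesteps the ordering problem is to tag the AMI in a separate ec2_tag task using the registered image id. This is only a sketch, reusing the variables and the `register: output` from the report.

```yaml
# Sketch: tag the AMI in its own step so the tags do not depend on how
# ec2_ami handles wait/timeouts. Reuses the register: output from above.
- name: tag the freshly created AMI
  ec2_tag:
    region: "{{ ec2_region }}"
    resource: "{{ output.image_id }}"
    state: present
    tags:
      creation: "{{ ansible_date_time.epoch }}"
      expiration: "{{ expiration_date.stdout }}"
```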
Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. ##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"azure_rm_virualmashine issue - ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. ##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,azure rm virualmashine issue issue type bug report feature idea documentation report component name azure rm virtualmachine module ansible version ansible noarch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables python modules azure os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific fedora summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local gather facts false become false vars files environments azure azure credentials encrypted yml inventory environments azure azure credentials encrypted temp passwd yml vars roles create azure vm and roles create azure vm main yml name create vm with defaults azure rm virtualmachine resource group testing name admin username test user admin password test vm image offer centos publisher openlogic sku version latest expected results creatiion of vm actual results playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n 
module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed ,1 1691,6574180292.0,IssuesEvent,2017-09-11 11:51:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest,affects_2.2 bug_report cloud vmware waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X Sierra ##### SUMMARY vpshere_guest requires to provide esxi hostname and datacenter, but within our organisation we don't have rights to create VM on the host, only in Resource Pool ##### STEPS TO REPRODUCE ansible-playbook createvm_new.yml ``` --- - name: Create a VM in resource pool hosts: localhost connection: local gather_facts: False vars_prompt: - name: ""user"" prompt: ""Enter your username to virtualcenter"" private: no - name: ""password"" prompt: ""Enter your password to virtualcenter"" private: yes - name: ""guest"" prompt: ""Enter you guest VM name: "" private: no tasks: - name: create VM vsphere_guest: vcenter_hostname: virtualcenter.example.com validate_certs: no username: '{{ user }}' password: '{{ password }}' guest: '{{ guest }}' state: powered_off vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: This is a test VM vm_disk: disk1: size_gb: 10 type: thick datastore: my_datastore vm_nic: nic1: type: vmxnet3 network: GL-Network - Temp network_type: standard vm_hardware: memory_mb: 1024 num_cpus: 1 osid: centos64Guest scsi: paravirtual resource_pool: ""/Resources/GL - VMware - Team "" esxi: datacenter: my_site hostname: myesxhost.example.com ``` ##### EXPECTED RESULTS I expected that i only provide hardware details and Resource pool to use ##### ACTUAL RESULTS Ansible 
threw an exception cause of permission denied. If i comment out the esxi part it tells about missing key-pair ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1879, in \n main()\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1867, in main\n state=state\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1414, in create_vm\n taskmor = vsphere_client._proxy.CreateVM_Task(create_vm_request)._returnval\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/resources/VimService_services.py\"", line 1094, in CreateVM_Task\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 545, in Receive\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 464, in Receive\npysphere.ZSI.FaultException: Permission to perform this operation was denied.\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"vsphere_guest - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X Sierra ##### SUMMARY vpshere_guest requires to provide esxi hostname and datacenter, but within our organisation we don't have rights to create VM on the host, only in Resource Pool ##### STEPS TO REPRODUCE ansible-playbook createvm_new.yml ``` --- - name: Create a VM in resource pool hosts: localhost connection: local gather_facts: False vars_prompt: - name: ""user"" prompt: ""Enter your username to virtualcenter"" private: no - name: ""password"" prompt: ""Enter your password to virtualcenter"" private: yes - name: ""guest"" prompt: ""Enter you guest VM name: "" private: no tasks: - name: create VM vsphere_guest: vcenter_hostname: virtualcenter.example.com validate_certs: no username: '{{ user }}' password: '{{ password }}' guest: '{{ guest }}' state: powered_off vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: This is a test VM vm_disk: disk1: size_gb: 10 type: thick datastore: my_datastore vm_nic: nic1: type: vmxnet3 network: GL-Network - Temp network_type: standard vm_hardware: memory_mb: 1024 num_cpus: 1 osid: centos64Guest scsi: paravirtual resource_pool: ""/Resources/GL - VMware - Team "" esxi: datacenter: my_site hostname: myesxhost.example.com ``` ##### EXPECTED RESULTS I expected that i only provide hardware details and Resource pool to use ##### ACTUAL RESULTS Ansible threw an exception cause of permission denied. If i comment out the esxi part it tells about missing key-pair ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1879, in \n main()\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1867, in main\n state=state\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1414, in create_vm\n taskmor = vsphere_client._proxy.CreateVM_Task(create_vm_request)._returnval\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/resources/VimService_services.py\"", line 1094, in CreateVM_Task\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 545, in Receive\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 464, in Receive\npysphere.ZSI.FaultException: Permission to perform this operation was denied.\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,vsphere guest issue type bug report component name vsphere guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos x sierra summary vpshere guest requires to provide esxi hostname and datacenter but within our organisation we don t have rights to create vm on the host only in resource pool steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook createvm new yml name create a vm in resource pool hosts localhost connection local gather facts false vars prompt name user prompt enter your username to virtualcenter private no name password prompt enter your password to virtualcenter private yes name guest prompt enter you guest vm name private no tasks name create vm vsphere guest vcenter hostname virtualcenter example com validate certs no username user password password guest guest state powered off vm extra config vcpu hotadd yes mem hotadd yes notes this is a test vm vm disk size gb type thick datastore my datastore vm nic type network gl network temp network type standard vm hardware memory mb num cpus osid scsi paravirtual resource pool resources gl vmware team esxi datacenter my site hostname myesxhost example com expected results i expected that i only provide hardware details and resource pool to use actual results ansible threw an exception cause of permission denied if i comment out the esxi part it tells about missing key pair an exception occurred during task execution to see the full traceback use vvv the error was fatal failed changed false failed true module stderr traceback most recent call last n file var folders sz t ansible ansible module vsphere guest py line in n main n file var folders sz t ansible ansible module vsphere guest py line in main n state state n file var folders sz t ansible ansible module vsphere guest py line in create vm n taskmor vsphere client proxy createvm task create vm request returnval n file build bdist macosx egg pysphere resources vimservice services py line in createvm task n file build bdist macosx egg pysphere zsi client py line in receive n file build bdist macosx egg pysphere zsi client py line in receive npysphere zsi faultexception permission to perform this operation was 
denied n n module stdout msg module failure parsed false ,1 1116,4989040815.0,IssuesEvent,2016-12-08 10:27:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,please add more options to authorized_key,affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME authorized_key module ##### ANSIBLE VERSION N/A ##### SUMMARY I'd like to be able to specify: 1) owner/group of the AuthorizedKeysFile directory (currently weird: it's the last processes users, in my case where I iterate over a list of user+keys and put them in /etc/ssh/authorized_keys) 2) mode of the file: there is rarely point in having it 0600 (the standard, ok, but still).... 3) owner/group of the file: let's say I don't want to let the user to be able to change the file on their own. ",True,"please add more options to authorized_key - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME authorized_key module ##### ANSIBLE VERSION N/A ##### SUMMARY I'd like to be able to specify: 1) owner/group of the AuthorizedKeysFile directory (currently weird: it's the last processes users, in my case where I iterate over a list of user+keys and put them in /etc/ssh/authorized_keys) 2) mode of the file: there is rarely point in having it 0600 (the standard, ok, but still).... 3) owner/group of the file: let's say I don't want to let the user to be able to change the file on their own. ",1,please add more options to authorized key issue type feature idea component name authorized key module ansible version n a summary i d like to be able to specify owner group of the authorizedkeysfile directory currently weird it s the last processes users in my case where i iterate over a list of user keys and put them in etc ssh authorized keys mode of the file there is rarely point in having it the standard ok but still owner group of the file let s say i don t want to let the user to be able to change the file on their own ,1 1777,6575809800.0,IssuesEvent,2017-09-11 17:24:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"mysql_user invalid privileges string: Invalid privileges specified: frozenset(['\""REPLICATION SLAVE\""'])",affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION No custom configuration ##### OS / ENVIRONMENT OSX 10.11.6 ##### SUMMARY mysql_user throws `invalid privileges string` for the following task ``` - name: create repl mysql user mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:""REPLICATION SLAVE"",REQUIRESSL ``` ##### STEPS TO REPRODUCE Run the following task, MySQL version is `5.5.51` ``` - name: create repl mysql user mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:""REPLICATION SLAVE"",REQUIRESSL ``` ##### EXPECTED RESULTS Creation of `repl` user to perform mysql replication. 
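The mysql_user failure recorded above comes from literal double quotes ending up inside the privilege name when the task is written in key=value form. As an illustrative sketch only (not taken from the report, and untested against that exact MySQL 5.5 setup), the YAML dictionary syntax quotes the whole priv string once, so no stray quotes reach the module:

```yaml
# Hedged sketch: the same task as in the report, rewritten so the privilege
# name "REPLICATION SLAVE" carries no embedded double quotes.
- name: create repl mysql user
  mysql_user:
    name: repl
    password: "{{ mysql_repl_password }}"
    host: "%"
    priv: "*.*:REPLICATION SLAVE,REQUIRESSL"
    state: present
```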
##### ACTUAL RESULTS ``` TASK [mysql : create repl mysql user] ****************************************** task path: /Users/dbusby/Documents/Projects/Github/*******/playbooks/roles/mysql/tasks/main.yml:56 <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `"" && echo ansible-tmp-1475578388.34-258119882161583=""` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /var/folders/2g/cfqb94g549q05wndh_w9h7sh0000gn/T/tmpXkrbnY TO /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user <127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'chmod u+x /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/ /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-bmocsvhjerihxuoppgzgxoupupdtgxgi; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user; rm -rf ""/home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [***-dev]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": false, ""config_file"": ""/root/.my.cnf"", ""connect_timeout"": 30, ""encrypted"": false, ""host"": ""%"", ""host_all"": false, ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""repl"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": ""*.*:\""REPLICATION SLAVE\"",REQUIRESSL"", ""sql_log_bin"": true, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""repl""}, ""module_name"": ""mysql_user""}, ""msg"": ""invalid privileges string: Invalid privileges specified: frozenset(['\""REPLICATION SLAVE\""'])""} ``` ",True,"mysql_user invalid privileges string: Invalid privileges specified: frozenset(['\""REPLICATION SLAVE\""']) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION No custom configuration ##### OS / ENVIRONMENT OSX 10.11.6 ##### SUMMARY mysql_user throws `invalid privileges string` for the following task ``` - name: create repl mysql user mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:""REPLICATION SLAVE"",REQUIRESSL ``` ##### STEPS TO REPRODUCE Run the following task, MySQL version is `5.5.51` ``` - name: create repl mysql user mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:""REPLICATION SLAVE"",REQUIRESSL ``` ##### EXPECTED RESULTS Creation of `repl` user to perform mysql replication. ##### ACTUAL RESULTS ``` TASK [mysql : create repl mysql user] ****************************************** task path: /Users/dbusby/Documents/Projects/Github/*******/playbooks/roles/mysql/tasks/main.yml:56 <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `"" && echo ansible-tmp-1475578388.34-258119882161583=""` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /var/folders/2g/cfqb94g549q05wndh_w9h7sh0000gn/T/tmpXkrbnY TO /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user <127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '""'""'chmod u+x 
/home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/ /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ***** <127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-bmocsvhjerihxuoppgzgxoupupdtgxgi; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user; rm -rf ""/home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [***-dev]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": false, ""config_file"": ""/root/.my.cnf"", ""connect_timeout"": 30, ""encrypted"": false, ""host"": ""%"", ""host_all"": false, ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""repl"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": ""*.*:\""REPLICATION SLAVE\"",REQUIRESSL"", ""sql_log_bin"": true, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""repl""}, ""module_name"": ""mysql_user""}, ""msg"": ""invalid privileges string: Invalid privileges specified: frozenset(['\""REPLICATION SLAVE\""'])""} ``` ",1,mysql user invalid privileges string invalid privileges specified frozenset issue type bug report component name mysql user ansible version ansible config file configured module search path default w o overrides configuration no custom configuration os environment osx summary mysql user throws invalid privileges string for the following task name create repl mysql user mysql user name repl password mysql repl password host priv replication slave requiressl steps to reproduce run the following task mysql version is name create repl mysql user mysql user name repl password mysql repl password host priv replication slave requiressl expected results creation of repl user to perform mysql replication actual results task task path users dbusby documents projects github playbooks roles mysql tasks main yml establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpxkrbny to home ansible tmp ansible tmp mysql user ssh exec sftp b c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o 
kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp mysql user sleep establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success bmocsvhjerihxuoppgzgxoupupdtgxgi lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp mysql user rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args append privs false check implicit admin false config file root my cnf connect timeout encrypted false host host all false login host localhost login password null login port login unix socket null login user null name repl password value specified in no log parameter priv replication slave requiressl sql log bin true ssl ca null ssl cert null ssl key null state present update password always user repl module name mysql user msg invalid privileges string invalid privileges specified frozenset ,1 1119,4989596269.0,IssuesEvent,2016-12-08 12:26:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container doesn't allow setting log_options without setting log_driver,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 28feba2fb3) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/14 12:25:32 (GMT +200) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Docker uses the log driver 'json-file' by default (also stated in the docker_container documentation). If you want to provide log_options (e.g. for log rotation) you have to explicitly set the log_driver to json-file, else this module attribute is ignored, although it should IMHO assume the Docker default. This behavior is probably caused by the [following code](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_container.py#L1091) ``` def _parse_log_config(self): ''' Create a LogConfig object ''' if self.log_driver is None: return None ``` ##### STEPS TO REPRODUCE ``` docker_container: name: mongodb image: bitnami/mongodb log_options: max-size: ""512m"" max-file: ""5"" ``` ##### EXPECTED RESULTS When you check the specs of the running container (docker inspect) you see should see the passed log options. ``` ""LogConfig"": { ""Type"": ""json-file"", ""Config"": { ""max-file"": ""5"", ""max-size"": ""512m"" } ``` ##### ACTUAL RESULTS When you check the specs of the running container you see that the log config block is empty. 
``` ""LogConfig"": { ""Type"": ""json-file"", ""Config"": {} }, ``` ",True,"docker_container doesn't allow setting log_options without setting log_driver - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 28feba2fb3) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/14 12:25:32 (GMT +200) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Docker uses the log driver 'json-file' by default (also stated in the docker_container documentation). If you want to provide log_options (e.g. for log rotation) you have to explicitly set the log_driver to json-file, else this module attribute is ignored, although it should IMHO assume the Docker default. This behavior is probably caused by the [following code](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_container.py#L1091) ``` def _parse_log_config(self): ''' Create a LogConfig object ''' if self.log_driver is None: return None ``` ##### STEPS TO REPRODUCE ``` docker_container: name: mongodb image: bitnami/mongodb log_options: max-size: ""512m"" max-file: ""5"" ``` ##### EXPECTED RESULTS When you check the specs of the running container (docker inspect) you see should see the passed log options. ``` ""LogConfig"": { ""Type"": ""json-file"", ""Config"": { ""max-file"": ""5"", ""max-size"": ""512m"" } ``` ##### ACTUAL RESULTS When you check the specs of the running container you see that the log config block is empty. ``` ""LogConfig"": { ""Type"": ""json-file"", ""Config"": {} }, ``` ",1,docker container doesn t allow setting log options without setting log driver issue type bug report component name docker container ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration n a os environment n a summary docker uses the log driver json file by default also stated in the docker container documentation if you want to provide log options e g for log rotation you have to explicitly set the log driver to json file else this module attribute is ignored although it should imho assume the docker default this behavior is probably caused by the def parse log config self create a logconfig object if self log driver is none return none steps to reproduce docker container name mongodb image bitnami mongodb log options max size max file expected results when you check the specs of the running container docker inspect you see should see the passed log options logconfig type json file config max file max size actual results when you check the specs of the running container you see that the log config block is empty logconfig type json file config ,1 1301,5541936639.0,IssuesEvent,2017-03-22 14:02:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Feature request: allow iterating listeners with with_items inside ec2_elb_lb task,affects_2.1 aws cloud feature_idea waiting_on_maintainer,"ISSUE TYPE Feature Idea ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides CONFIGURATION $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg [defaults] gathering = explicit host_key_checking = False callback_whitelist = profile_tasks remote_user = ec2-user 
private_key_file = /Users/myuser/.ssh/mykey.pem ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} display_skipped_hosts = False command_warnings = True retry_files_enabled = False squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] Env vars: export ANSIBLE_HOST_KEY_CHECKING='false' export AWS_REGION='eu-central-1' and the obvious AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY ``` OS / ENVIRONMENT From Mac OS X Capitan to N/a (AWS) SUMMARY I need to create an elastic load balancer which listens on a lot of ports (i.e. 100+). I am aware that AWS ELBs do not allow for port ranges in their ELB module (just on security groups), so I would like to do something like this: ``` - name: Create ELB for sender servers local_action: module: ec2_elb_lb name: ""{{ elb_sender }}"" state: present zones: ""{{ availability_zones }}"" tags: Name: ""{{ elb_sender }}"" listeners: - protocol: tcp load_balancer_port: ""{{ item }}"" instance_port: ""{{ item }}"" proxy_protocol: True cross_az_load_balancing: ""yes"" security_group_names: ""{{ security_group_sender }}"" wait: yes with_items: - 80 - 81 ``` Which does not error out but only creates an ELB listening on the last port on the items list. STEPS TO REPRODUCE The new feature would be as above. ``` - name: Create ELB for sender servers local_action: module: ec2_elb_lb name: ""{{ elb_sender }}"" state: present zones: ""{{ availability_zones }}"" tags: Name: ""{{ elb_sender }}"" listeners: - protocol: tcp load_balancer_port: ""{{ item }}"" instance_port: ""{{ item }}"" proxy_protocol: True cross_az_load_balancing: ""yes"" security_group_names: ""{{ security_group_sender }}"" wait: yes with_items: - 80 - 81 ``` EXPECTED RESULTS I expected/hoped for a loop of the listeners part of the task. ACTUAL RESULTS An ELB with a listener on port 81. Thanks for considering it. ",True,"Feature request: allow iterating listeners with with_items inside ec2_elb_lb task - ISSUE TYPE Feature Idea ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides CONFIGURATION $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg [defaults] gathering = explicit host_key_checking = False callback_whitelist = profile_tasks remote_user = ec2-user private_key_file = /Users/myuser/.ssh/mykey.pem ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} display_skipped_hosts = False command_warnings = True retry_files_enabled = False squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] Env vars: export ANSIBLE_HOST_KEY_CHECKING='false' export AWS_REGION='eu-central-1' and the obvious AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY ``` OS / ENVIRONMENT From Mac OS X Capitan to N/a (AWS) SUMMARY I need to create an elastic load balancer which listens on a lot of ports (i.e. 100+). 
I am aware that AWS ELBs do not allow for port ranges in their ELB module (just on security groups), so I would like to do something like this: ``` - name: Create ELB for sender servers local_action: module: ec2_elb_lb name: ""{{ elb_sender }}"" state: present zones: ""{{ availability_zones }}"" tags: Name: ""{{ elb_sender }}"" listeners: - protocol: tcp load_balancer_port: ""{{ item }}"" instance_port: ""{{ item }}"" proxy_protocol: True cross_az_load_balancing: ""yes"" security_group_names: ""{{ security_group_sender }}"" wait: yes with_items: - 80 - 81 ``` Which does not error out but only creates an ELB listening on the last port on the items list. STEPS TO REPRODUCE The new feature would be as above. ``` - name: Create ELB for sender servers local_action: module: ec2_elb_lb name: ""{{ elb_sender }}"" state: present zones: ""{{ availability_zones }}"" tags: Name: ""{{ elb_sender }}"" listeners: - protocol: tcp load_balancer_port: ""{{ item }}"" instance_port: ""{{ item }}"" proxy_protocol: True cross_az_load_balancing: ""yes"" security_group_names: ""{{ security_group_sender }}"" wait: yes with_items: - 80 - 81 ``` EXPECTED RESULTS I expected/hoped for a loop of the listeners part of the task. ACTUAL RESULTS An ELB with a listener on port 81. Thanks for considering it. ",1,feature request allow iterating listeners with with items inside elb lb task issue type feature idea ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration egrep v etc ansible ansible cfg gathering explicit host key checking false callback whitelist profile tasks remote user user private key file users myuser ssh mykey pem ansible managed ansible managed file modified on y m d h m s by uid on host display skipped hosts false command warnings true retry files enabled false squash actions apk apt dnf package pacman pkgng yum zypper env vars export ansible host key checking false export aws region eu central and the obvious aws access key id and aws secret access key os environment from mac os x capitan to n a aws summary i need to create an elastic load balancer which listens on a lot of ports i e i am aware that aws elbs do not allow for port ranges in their elb module just on security groups so i would like to do something like this name create elb for sender servers local action module elb lb name elb sender state present zones availability zones tags name elb sender listeners protocol tcp load balancer port item instance port item proxy protocol true cross az load balancing yes security group names security group sender wait yes with items which does not error out but only creates an elb listening on the last port on the items list steps to reproduce the new feature would be as above name create elb for sender servers local action module elb lb name elb sender state present zones availability zones tags name elb sender listeners protocol tcp load balancer port item instance port item proxy protocol true cross az load balancing yes security group names security group sender wait yes with items expected results i expected hoped for a loop of the listeners part of the task actual results an elb with a listener on port thanks for considering it ,1 1622,6572646371.0,IssuesEvent,2017-09-11 04:02:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user module should allow setting of primary group id,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Component 
Name: user module ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: Most relevant to Unix targets. ##### Summary: A common pattern (e.g. it's the default behaviour in Debian) is to create user and group with the same name and id - e.g. user mcv21 group mcv21 have the same UID. The user module is unable to do this - you can specify user id and primary group, but not the id of the primary group ##### Steps To Reproduce: An example usage: ``` - name: create a user user: name=foo uid=1021 group=foo gid=1021 ``` ",True,"user module should allow setting of primary group id - ##### Issue Type: - Bug Report ##### Component Name: user module ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: Most relevant to Unix targets. ##### Summary: A common pattern (e.g. it's the default behaviour in Debian) is to create user and group with the same name and id - e.g. user mcv21 group mcv21 have the same UID. The user module is unable to do this - you can specify user id and primary group, but not the id of the primary group ##### Steps To Reproduce: An example usage: ``` - name: create a user user: name=foo uid=1021 group=foo gid=1021 ``` ",1,user module should allow setting of primary group id issue type bug report component name user module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides environment most relevant to unix targets summary a common pattern e g it s the default behaviour in debian is to create user and group with the same name and id e g user group have the same uid the user module is unable to do this you can specify user id and primary group but not the id of the primary group steps to reproduce an example usage name create a user user name foo uid group foo gid ,1 1721,6574493498.0,IssuesEvent,2017-09-11 13:05:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,dellos9_command ansible hangs after reload command issued to remote device.,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME dellos9_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` --- - name: AGG base config hosts: baseagg vars: node: agg cli: host: ""{{ ansible_host }}"" transport: cli username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub roles: - deployconfig roles/deployconfig/ ├── tasks │   └── main.yml - name: reload dellos9_command: provider: ""{{ cli }}"" commands: ""reload no-confirm"" ``` ##### OS / ENVIRONMENT ``` Linux ansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY Create a task where the dellos9_command can issue a remote reload of the switch. Ansible will never timeout after the switch has rebooted. ##### STEPS TO REPRODUCE ``` ansible-playbook masdbaseconfig.yml --limit s6000 -vvvvv ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` After waiting 60+ minutes for the job to fail, I had to issue a ctrl+c to break the process. 
TASK [deployconfig : reload] *************************************************** task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/roles/deployconfig/tasks/main.yml:31 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/dellos9/dellos9_command.py <10.10.234.96> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.234.96> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675 `"" && echo ansible-tmp-1478116357.69-28845854088675=""` echo $HOME/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675 `"" ) && sleep 0' <10.10.234.96> PUT /tmp/tmpWKMxtR TO /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py <10.10.234.96> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/ /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py && sleep 0' <10.10.234.96> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/"" > /dev/null 2>&1 && sleep 0' CProcess WorkerProcess-19: Traceback (most recent call last): File ""/usr/lib/python2.7/multiprocessing/process.py"", line 258, in _bootstrap self.run() File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 112, in run self._rslt_q File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/normal.py"", line 33, in run results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars)) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 643, in _execute_module res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 754, in _low_level_execute_command rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/connection/local.py"", line 114, in exec_command stdout, stderr = p.communicate(in_data) File ""/usr/lib/python2.7/subprocess.py"", line 800, in communicate return self._communicate(input) File ""/usr/lib/python2.7/subprocess.py"", line 1417, in _communicate stdout, stderr = self._communicate_with_poll(input) File ""/usr/lib/python2.7/subprocess.py"", line 1471, in _communicate_with_poll ready = poller.poll() KeyboardInterrupt [ERROR]: User interrupted execution ``` ",True,"dellos9_command ansible hangs after reload command issued to remote device. 
- ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME dellos9_command ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` --- - name: AGG base config hosts: baseagg vars: node: agg cli: host: ""{{ ansible_host }}"" transport: cli username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub roles: - deployconfig roles/deployconfig/ ├── tasks │   └── main.yml - name: reload dellos9_command: provider: ""{{ cli }}"" commands: ""reload no-confirm"" ``` ##### OS / ENVIRONMENT ``` Linux ansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY Create a task where the dellos9_command can issue a remote reload of the switch. Ansible will never timeout after the switch has rebooted. ##### STEPS TO REPRODUCE ``` ansible-playbook masdbaseconfig.yml --limit s6000 -vvvvv ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` After waiting 60+ minutes for the job to fail, I had to issue a ctrl+c to break the process. TASK [deployconfig : reload] *************************************************** task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/roles/deployconfig/tasks/main.yml:31 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/dellos9/dellos9_command.py <10.10.234.96> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.234.96> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675 `"" && echo ansible-tmp-1478116357.69-28845854088675=""` echo $HOME/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675 `"" ) && sleep 0' <10.10.234.96> PUT /tmp/tmpWKMxtR TO /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py <10.10.234.96> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/ /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py && sleep 0' <10.10.234.96> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/dellos9_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478116357.69-28845854088675/"" > /dev/null 2>&1 && sleep 0' CProcess WorkerProcess-19: Traceback (most recent call last): File ""/usr/lib/python2.7/multiprocessing/process.py"", line 258, in _bootstrap self.run() File ""/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py"", line 112, in run self._rslt_q File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/normal.py"", line 33, in run results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars)) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 643, in _execute_module res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 754, in _low_level_execute_command rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable) File ""/usr/lib/python2.7/dist-packages/ansible/plugins/connection/local.py"", line 114, in exec_command stdout, stderr = 
p.communicate(in_data) File ""/usr/lib/python2.7/subprocess.py"", line 800, in communicate return self._communicate(input) File ""/usr/lib/python2.7/subprocess.py"", line 1417, in _communicate stdout, stderr = self._communicate_with_poll(input) File ""/usr/lib/python2.7/subprocess.py"", line 1471, in _communicate_with_poll ready = poller.poll() KeyboardInterrupt [ERROR]: User interrupted execution ``` ",1, command ansible hangs after reload command issued to remote device issue type bug report component name command ansible version ansible config file home emarq solutions network automation mas ansible dell ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables name agg base config hosts baseagg vars node agg cli host ansible host transport cli username admin ssh keyfile srv tftpboot my rsa pub roles deployconfig roles deployconfig ├── tasks │   └── main yml name reload command provider cli commands reload no confirm os environment linux ansible generic ubuntu smp wed oct utc gnu linux summary create a task where the command can issue a remote reload of the switch ansible will never timeout after the switch has rebooted steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook masdbaseconfig yml limit vvvvv expected results actual results after waiting minutes for the job to fail i had to issue a ctrl c to break the process task task path home emarq solutions network automation mas ansible dell roles deployconfig tasks main yml using module file usr lib dist packages ansible modules core network command py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpwkmxtr to home emarq ansible tmp ansible tmp command py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp command py sleep exec bin sh c usr bin python home emarq ansible tmp ansible tmp command py rm rf home emarq ansible tmp ansible tmp dev null sleep cprocess workerprocess traceback most recent call last file usr lib multiprocessing process py line in bootstrap self run file usr lib dist packages ansible executor process worker py line in run self rslt q file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action normal py line in run results merge hash results self execute module tmp tmp task vars task vars file usr lib dist packages ansible plugins action init py line in execute module res self low level execute command cmd sudoable sudoable in data in data file usr lib dist packages ansible plugins action init py line in low level execute command rc stdout stderr self connection exec command cmd in data in data sudoable sudoable file usr lib dist packages ansible plugins connection local py line in exec command stdout stderr p communicate in data file usr lib subprocess py line in communicate return self communicate input file usr lib subprocess py line in communicate stdout stderr self communicate with poll input file usr lib subprocess py line in communicate with poll ready poller poll keyboardinterrupt user interrupted execution ,1 
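For the dellos9_command reload hang recorded above, one common mitigation is to launch the reload without waiting on the CLI session and then poll until SSH on the switch answers again. This is offered as an assumption rather than something the report confirms works for this module; the host, delay and timeout values below are illustrative:

```yaml
# Hedged sketch: fire-and-forget the reload, then wait for the device to return.
- name: reload (do not wait for the CLI session to survive)
  dellos9_command:
    provider: "{{ cli }}"
    commands: "reload no-confirm"
  async: 60   # assumed acceptable for a locally executed CLI module
  poll: 0

- name: wait for the switch to come back
  local_action:
    module: wait_for
    host: "{{ ansible_host }}"
    port: 22
    delay: 60
    timeout: 600
```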
748,4351169776.0,IssuesEvent,2016-07-31 18:06:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker: _human_to_bytes broken on current devel head,bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION N/A ##### SUMMARY As of 8d126bd877444c9557b1671521516447cc557d3f the _human_to_bytes() function in docker.py is failing to work at all due to throwing the exception that ""Cannot convert 0 into an integer"" Without digging too deep on why, it looks like the simple case of the value defaulting to 0 isn't caught and handled properly.",True,"docker: _human_to_bytes broken on current devel head - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME _docker module ##### ANSIBLE VERSION N/A ##### SUMMARY As of 8d126bd877444c9557b1671521516447cc557d3f the _human_to_bytes() function in docker.py is failing to work at all due to throwing the exception that ""Cannot convert 0 into an integer"" Without digging too deep on why, it looks like the simple case of the value defaulting to 0 isn't caught and handled properly.",1,docker human to bytes broken on current devel head issue type bug report component name docker module ansible version n a summary as of the human to bytes function in docker py is failing to work at all due to throwing the exception that cannot convert into an integer without digging too deep on why it looks like the simple case of the value defaulting to isn t caught and handled properly ,1 1055,4864134304.0,IssuesEvent,2016-11-14 17:09:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum module saying packages are up to date when it's not,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT CentOS release 6.7 (Final) ##### SUMMARY yum module says package is up to date while it isn't. An update with the command module works. I'm sure there's a new version because Maven deploys it on a Nexus repo and yum check-update says there's a new version. ##### STEPS TO REPRODUCE ``` - name: Install RPM yum: name=app-name state=latest ``` ##### EXPECTED RESULTS Status changed. The following : ``` - name: Install RPM command: yum install -y app-name ``` Installs the new version correctly. Why doesn't the yum module do the same ? ##### ACTUAL RESULTS ``` ok: [host] => {""changed"": false, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""app-name""], ""state"": ""latest"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": "" Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API."", ""rc"": 0, ""results"": [""All packages providing app-name are up to date"", """"]} ``` ",True,"yum module saying packages are up to date when it's not - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT CentOS release 6.7 (Final) ##### SUMMARY yum module says package is up to date while it isn't. An update with the command module works. 
I'm sure there's a new version because Maven deploys it on a Nexus repo and yum check-update says there's a new version. ##### STEPS TO REPRODUCE ``` - name: Install RPM yum: name=app-name state=latest ``` ##### EXPECTED RESULTS Status changed. The following : ``` - name: Install RPM command: yum install -y app-name ``` Installs the new version correctly. Why doesn't the yum module do the same ? ##### ACTUAL RESULTS ``` ok: [host] => {""changed"": false, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""app-name""], ""state"": ""latest"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": "" Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API."", ""rc"": 0, ""results"": [""All packages providing app-name are up to date"", """"]} ``` ",1,yum module saying packages are up to date when it s not issue type bug report component name yum ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos release final summary yum module says package is up to date while it isn t an update with the command module works i m sure there s a new version because maven deploys it on a nexus repo and yum check update says there s a new version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name install rpm yum name app name state latest expected results status changed the following name install rpm command yum install y app name installs the new version correctly why doesn t the yum module do the same actual results ok changed false invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install repoquery true list null name state latest update cache false validate certs true module name yum msg warning due to potential bad behaviour with rhnplugin and certificates used slower repoquery calls instead of yum api rc results ,1 1903,6577556021.0,IssuesEvent,2017-09-12 01:44:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"mount is not idempotent for bind mount (Ubuntu 14.04, ansible 2.0.1.0)",affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: mount ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: no changes from default ##### Environment: Ubuntu 14.04 control host and target host, but see below for exact steps to reproduce ##### Summary: Using the mount module to set up and mount a bind mount is not idempotent. ##### Steps To Reproduce: The following playbook illustrates the problem: ``` --- - hosts: all tasks: - file: path=/tmp/1 state=directory - file: path=/tmp/2 state=directory - mount: src=/tmp/1 name=/tmp/2 fstype=- opts=bind state=mounted ``` Here's a detailed recipe for reproducing this if you run into trouble. - Launch an instance in ec2 off the latest ubuntu trusty AMI. 
A t2.nano is sufficient - On the AMI: ``` sudo -s apt-get -y install software-properties-common apt-add-repository -y ppa:ansible/ansible apt-get update apt-get -y install ansible cd /tmp cat >a.yml <local <a.yml <local < ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME git module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /home/duncan/dev/agile/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I would like to set extra git options, but still use the ansible `git` module, rather than having to use `shell/command` instead. ##### STEPS TO REPRODUCE Specifically, I would like to be able to add a proxy config to a particular git clone. This could work in the same way as the `pip` module - for example: ``` yaml - name: Cloning repo git: repo: {{ git_repo }} version: 'dev' dest: {{ repo_dir }} extra_args: '--config ""http.proxy=proxyHost:proxyPort""' ``` but there are lots of other arguments that one might want to pass. ",True,"git: Add an extra_args option to the git module. - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME git module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /home/duncan/dev/agile/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I would like to set extra git options, but still use the ansible `git` module, rather than having to use `shell/command` instead. ##### STEPS TO REPRODUCE Specifically, I would like to be able to add a proxy config to a particular git clone. This could work in the same way as the `pip` module - for example: ``` yaml - name: Cloning repo git: repo: {{ git_repo }} version: 'dev' dest: {{ repo_dir }} extra_args: '--config ""http.proxy=proxyHost:proxyPort""' ``` but there are lots of other arguments that one might want to pass. ",1,git add an extra args option to the git module issue type feature idea component name git module ansible version ansible config file home duncan dev agile ansible cfg configured module search path default w o overrides configuration n a os environment n a summary i would like to set extra git options but still use the ansible git module rather than having to use shell command instead steps to reproduce specifically i would like to be able to add a proxy config to a particular git clone this could work in the same way as the pip module for example yaml name cloning repo git repo git repo version dev dest repo dir extra args config http proxy proxyhost proxyport but there are lots of other arguments that one might want to pass ,1 1053,4863884893.0,IssuesEvent,2016-11-14 16:31:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway",affects_1.9 aws bug_report cloud waiting_on_maintainer,"Issue Type: Bug Report Ansible Version: 1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible 2.0.0 (devel f0efe1ecb0) Ansible Configuration: default \ as installed Environment: Ubuntu 14.04 Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure Steps To Reproduce: 1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006 2. Playbook will fail with message ""msg: wait for spot requests timeout on ..."" 3. 
Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags Expected Results: Cancel sport request on spot_wait_timeout Actual Results: Spot request still open ",True,"ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway - Issue Type: Bug Report Ansible Version: 1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible 2.0.0 (devel f0efe1ecb0) Ansible Configuration: default \ as installed Environment: Ubuntu 14.04 Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure Steps To Reproduce: 1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006 2. Playbook will fail with message ""msg: wait for spot requests timeout on ..."" 3. Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags Expected Results: Cancel sport request on spot_wait_timeout Actual Results: Spot request still open ",1, module spot wait timeout exceeded but spot instance launched anyway issue type bug report ansible version from devel ansible configuration default as installed environment ubuntu summary spot wait timeout exceeded and task failed but spot instance launched anyway because spot request is not canceled on failure steps to reproduce run playbook playbook will fail with message msg wait for spot requests timeout on check spot requests in aws console micro will be open and eventually will be converted to instance without any tags expected results cancel sport request on spot wait timeout actual results spot request still open ,1 919,4622130203.0,IssuesEvent,2016-09-27 06:01:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image module doesn't work with local registry,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_image ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 16.04 LTS ##### SUMMARY I installed my own local registry on localhost 5000: ```` docker run -d -p 5000:5000 --restart=always registry:2 ``` and when triggering the docker_image module I get: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first.""} ``` I am able to push an image to this registry using docker push from the command line. ##### STEPS TO REPRODUCE I use the docker_image module in the following way, in a basic playbook: Tag the image: ``` docker tag my_image localhost:5000/my_image ``` Use the docker_image module: ``` - name: Try local registry docker_image: path: ""{{my_image_dir}}"" name: localhost:5000/my_image force: true state: present ``` I do curl ``` http://localhost:5000/v2/_catalog ``` and this returns fine, so the registry works at localhost:5000 but the image doesn't get pushed. ##### EXPECTED RESULTS Get the image to the local registry. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: configuration for localhost:5000 not found. 
Try logging into localhost:5000 first.""} ``` ",True,"docker_image module doesn't work with local registry - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_image ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 16.04 LTS ##### SUMMARY I installed my own local registry on localhost 5000: ```` docker run -d -p 5000:5000 --restart=always registry:2 ``` and when triggering the docker_image module I get: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first.""} ``` I am able to push an image to this registry using docker push from the command line. ##### STEPS TO REPRODUCE I use the docker_image module in the following way, in a basic playbook: Tag the image: ``` docker tag my_image localhost:5000/my_image ``` Use the docker_image module: ``` - name: Try local registry docker_image: path: ""{{my_image_dir}}"" name: localhost:5000/my_image force: true state: present ``` I do curl ``` http://localhost:5000/v2/_catalog ``` and this returns fine, so the registry works at localhost:5000 but the image doesn't get pushed. ##### EXPECTED RESULTS Get the image to the local registry. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first.""} ``` ",1,docker image module doesn t work with local registry issue type bug report component name docker image ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu lts summary i installed my own local registry on localhost docker run d p restart always registry and when triggering the docker image module i get fatal failed changed false failed true msg error configuration for localhost not found try logging into localhost first i am able to push an image to this registry using docker push from the command line steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i use the docker image module in the following way in a basic playbook tag the image docker tag my image localhost my image use the docker image module name try local registry docker image path my image dir name localhost my image force true state present i do curl and this returns fine so the registry works at localhost but the image doesn t get pushed expected results get the image to the local registry actual results fatal failed changed false failed true msg error configuration for localhost not found try logging into localhost first ,1 1774,6575800094.0,IssuesEvent,2017-09-11 17:22:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"docker_container module - ""Error connecting container to network""",affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/ret/Projects/servers/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory = ./inventory% ``` ##### OS / ENVIRONMENT OSX 10.10.5 (Yosemite) and OSX Sierra ##### SUMMARY When trying to create 
docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static IP address I'm getting an error saying: ""Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'"" ##### STEPS TO REPRODUCE Create network playbook: ``` - name: Create private network command: docker network create --subnet=192.168.100.0/24 --ip-range=192.168.100.0/24 --gateway=192.168.100.1 -o parent=eth0 privnet when: privnet is defined and dockernets.stdout.find(privnet.name) == -1 ``` Create docker container playbook: ``` --- - file: path=/shared/config/plex state=directory mode=0755 owner=797 recurse=true - name: plex in docker docker_container: name: ""plex"" hostname: ""box1plex"" image: timhaak/plex state: started restart_policy: always pull: true networks: - name: privnet ipv4_address: 192.168.100.10 purge_networks: yes log_driver: syslog log_opt: tag: ""plex"" volumes: - /shared/config/plex:/config - /shared/plex:/data ``` ##### EXPECTED RESULTS I expect a successful container creation instead of an error. ##### ACTUAL RESULTS ``` TASK [plex : plex in docker] *************************************************** task path: /Users/ret/Projects/servers/roles/plex/tasks/main.yml:3 ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `"" && echo ansible-tmp-1475741968.88-156551952368602=""` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `"" ) && sleep 0'""'""'' PUT /var/folders/7t/0myxzv9j64z0y6r2vl3wwg6m0000gn/T/tmpWiTHhW TO /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r '[box.mydomain.com]' ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '""'""'chmod u+x /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/ /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r -tt box.mydomain.com '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-prfzskmhsxhsivtoucrzsgcloglldjbv; LANG=es_ES.UTF-8 LC_ALL=es_ES.UTF-8 LC_MESSAGES=es_ES.UTF-8 /usr/bin/python 
/home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container; rm -rf ""/home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [box.mydomain.com]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": ""box1plex"", ""image"": ""timhaak/plex"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""syslog"", ""log_opt"": {""tag"": ""plex""}, ""log_options"": {""tag"": ""plex""}, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""plex"", ""network_mode"": null, ""networks"": [{""id"": ""ab1a4406681e5fef4eef6409c6819615912b4b3c6ac5e6d0161b744a96d981d1"", ""ipv4_address"": ""192.168.100.10"", ""name"": ""privnet""}], ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": true, ""purge_networks"": true, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": ""always"", ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": [""/shared/config/plex:/config"", ""/shared/plex:/data""], ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""msg"": ""Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'""} ``` Cheers, R. 
",True,"docker_container module - ""Error connecting container to network"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/ret/Projects/servers/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory = ./inventory% ``` ##### OS / ENVIRONMENT OSX 10.10.5 (Yosemite) and OSX Sierra ##### SUMMARY When trying to create docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static IP address I'm getting an error saying: ""Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'"" ##### STEPS TO REPRODUCE Create network playbook: ``` - name: Create private network command: docker network create --subnet=192.168.100.0/24 --ip-range=192.168.100.0/24 --gateway=192.168.100.1 -o parent=eth0 privnet when: privnet is defined and dockernets.stdout.find(privnet.name) == -1 ``` Create docker container playbook: ``` --- - file: path=/shared/config/plex state=directory mode=0755 owner=797 recurse=true - name: plex in docker docker_container: name: ""plex"" hostname: ""box1plex"" image: timhaak/plex state: started restart_policy: always pull: true networks: - name: privnet ipv4_address: 192.168.100.10 purge_networks: yes log_driver: syslog log_opt: tag: ""plex"" volumes: - /shared/config/plex:/config - /shared/plex:/data ``` ##### EXPECTED RESULTS I expect a successful container creation instead of an error. ##### ACTUAL RESULTS ``` TASK [plex : plex in docker] *************************************************** task path: /Users/ret/Projects/servers/roles/plex/tasks/main.yml:3 ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `"" && echo ansible-tmp-1475741968.88-156551952368602=""` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `"" ) && sleep 0'""'""'' PUT /var/folders/7t/0myxzv9j64z0y6r2vl3wwg6m0000gn/T/tmpWiTHhW TO /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r '[box.mydomain.com]' ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '""'""'chmod u+x /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/ /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: ret SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r -tt box.mydomain.com '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-prfzskmhsxhsivtoucrzsgcloglldjbv; LANG=es_ES.UTF-8 LC_ALL=es_ES.UTF-8 LC_MESSAGES=es_ES.UTF-8 /usr/bin/python /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container; rm -rf ""/home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [box.mydomain.com]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": ""box1plex"", ""image"": ""timhaak/plex"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""syslog"", ""log_opt"": {""tag"": ""plex""}, ""log_options"": {""tag"": ""plex""}, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""plex"", ""network_mode"": null, ""networks"": [{""id"": ""ab1a4406681e5fef4eef6409c6819615912b4b3c6ac5e6d0161b744a96d981d1"", ""ipv4_address"": ""192.168.100.10"", ""name"": ""privnet""}], ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": true, ""purge_networks"": true, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": ""always"", ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": [""/shared/config/plex:/config"", ""/shared/plex:/data""], ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""msg"": ""Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'""} ``` Cheers, R. 
",1,docker container module error connecting container to network issue type bug report component name docker container ansible version ansible config file users ret projects servers ansible cfg configured module search path default w o overrides configuration inventory inventory os environment osx yosemite and osx sierra summary when trying to create docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static ip address i m getting an error saying error connecting container to network privnet connect container to network got an unexpected keyword argument address steps to reproduce create network playbook name create private network command docker network create subnet ip range gateway o parent privnet when privnet is defined and dockernets stdout find privnet name create docker container playbook file path shared config plex state directory mode owner recurse true name plex in docker docker container name plex hostname image timhaak plex state started restart policy always pull true networks name privnet address purge networks yes log driver syslog log opt tag plex volumes shared config plex config shared plex data expected results i expect a successful container creation instead of an error actual results task task path users ret projects servers roles plex tasks main yml establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r box mydomain com bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpwithhw to home ret ansible tmp ansible tmp docker container ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r box mydomain com bin sh c chmod u x home ret ansible tmp ansible tmp home ret ansible tmp ansible tmp docker container sleep establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r tt box mydomain com bin sh c sudo h s n u root bin sh c echo become success prfzskmhsxhsivtoucrzsgcloglldjbv lang es es utf lc all es es utf lc messages es es utf usr bin python home ret ansible tmp ansible tmp docker container rm rf home ret ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null command null cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns 
servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname image timhaak plex interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver syslog log opt tag plex log options tag plex mac address null memory memory reservation null memory swap null memory swappiness null name plex network mode null networks oom killer null paused false pid mode null privileged false published ports null pull true purge networks true read only false recreate false restart false restart policy always restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes volumes from null module name docker container msg error connecting container to network privnet connect container to network got an unexpected keyword argument address cheers r ,1 1753,6574969712.0,IssuesEvent,2017-09-11 14:38:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,iam update user fails to find existing user,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam core module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /Users/ME/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running on OS X Sierra (10.12) ##### SUMMARY I was using the `iam` core module, and ran into an interesting bug when attempting to update an existing IAM user. I first created an IAM user (whether this was done through Ansible or through AWS, the result was still the same). I then attempted to use the `iam` module to update this user, but received an error stating that `The user USER does not exist`. In one particular test, I used the update user functionality to check for the existence of the user, and then proceeded to create the user if it did not exist (a `FAILED` status = user does not exist, so go ahead and create it). Before this run, the user in question had already been created; as such, the first action should have succeeded, and the second action should have skipped. However, updating the user failed, spitting out the error supplied above, which made the playbook attempt to create the user, which failed because the user already existed! The aforementioned test was done as a workaround to, what I believe is, a gap in functionality of the `iam` module. Ansible is fantastically idempotent, but this module appears not to be. When creating a new IAM user, it would be extremely beneficial for the task to not error out if the user already exists, but rather skip the task and move on (like when you try to `yum` install a package that is already installed). ##### STEPS TO REPRODUCE I am running this all locally. The relevant snippets of the playbook are the following: ``` ... - name: Check for existence of user connection: local iam: iam_type: user name: USER state: update access_key_state: create register: user_creation ignore_errors: yes - name: Create user connection: local iam: iam_type: user name: USER state: present access_key_state: create register: user_creation when: user_creation.failed == true ... 
``` ##### EXPECTED RESULTS I expected to successfully update the existing IAM user when using the `iam` module, with `state: update`. ##### ACTUAL RESULTS Here is the output from the test case described above. You will see two contradicting output messages! ``` ... TASK [Check for presence of user] ********************************************** fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""The user USER does not exist. No update made.""} ...ignoring TASK [Create user] ************************************************************* fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""BotoServerError: 409 Conflict\n\n \n Sender\n EntityAlreadyExists\n User with name USER already exists.\n \n 5f797c8d-948f-11e6-9002-172eb1b2f46a\n\n""} ... ``` ",True,"iam update user fails to find existing user - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME iam core module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /Users/ME/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running on OS X Sierra (10.12) ##### SUMMARY I was using the `iam` core module, and ran into an interesting bug when attempting to update an existing IAM user. I first created an IAM user (whether this was done through Ansible or through AWS, the result was still the same). I then attempted to use the `iam` module to update this user, but received an error stating that `The user USER does not exist`. In one particular test, I used the update user functionality to check for the existence of the user, and then proceeded to create the user if it did not exist (a `FAILED` status = user does not exist, so go ahead and create it). Before this run, the user in question had already been created; as such, the first action should have succeeded, and the second action should have skipped. However, updating the user failed, spitting out the error supplied above, which made the playbook attempt to create the user, which failed because the user already existed! The aforementioned test was done as a workaround to, what I believe is, a gap in functionality of the `iam` module. Ansible is fantastically idempotent, but this module appears not to be. When creating a new IAM user, it would be extremely beneficial for the task to not error out if the user already exists, but rather skip the task and move on (like when you try to `yum` install a package that is already installed). ##### STEPS TO REPRODUCE I am running this all locally. The relevant snippets of the playbook are the following: ``` ... - name: Check for existence of user connection: local iam: iam_type: user name: USER state: update access_key_state: create register: user_creation ignore_errors: yes - name: Create user connection: local iam: iam_type: user name: USER state: present access_key_state: create register: user_creation when: user_creation.failed == true ... ``` ##### EXPECTED RESULTS I expected to successfully update the existing IAM user when using the `iam` module, with `state: update`. ##### ACTUAL RESULTS Here is the output from the test case described above. You will see two contradicting output messages! ``` ... TASK [Check for presence of user] ********************************************** fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""The user USER does not exist. 
No update made.""} ...ignoring TASK [Create user] ************************************************************* fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""BotoServerError: 409 Conflict\n\n \n Sender\n EntityAlreadyExists\n User with name USER already exists.\n \n 5f797c8d-948f-11e6-9002-172eb1b2f46a\n\n""} ... ``` ",1,iam update user fails to find existing user issue type bug report component name iam core module ansible version ansible config file users me ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment running on os x sierra summary i was using the iam core module and ran into an interesting bug when attempting to update an existing iam user i first created an iam user whether this was done through ansible or through aws the result was still the same i then attempted to use the iam module to update this user but received an error stating that the user user does not exist in one particular test i used the update user functionality to check for the existence of the user and then proceeded to create the user if it did not exist a failed status user does not exist so go ahead and create it before this run the user in question had already been created as such the first action should have succeeded and the second action should have skipped however updating the user failed spitting out the error supplied above which made the playbook attempt to create the user which failed because the user already existed the aforementioned test was done as a workaround to what i believe is a gap in functionality of the iam module ansible is fantastically idempotent but this module appears not to be when creating a new iam user it would be extremely beneficial for the task to not error out if the user already exists but rather skip the task and move on like when you try to yum install a package that is already installed steps to reproduce i am running this all locally the relevant snippets of the playbook are the following name check for existence of user connection local iam iam type user name user state update access key state create register user creation ignore errors yes name create user connection local iam iam type user name user state present access key state create register user creation when user creation failed true expected results i expected to successfully update the existing iam user when using the iam module with state update actual results here is the output from the test case described above you will see two contradicting output messages task fatal failed changed false failed true msg the user user does not exist no update made ignoring task fatal failed changed false failed true msg botoservererror conflict n n sender n entityalreadyexists n user with name user already exists n n n n ,1 1906,6577561924.0,IssuesEvent,2017-09-12 01:46:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Opensuse version <= 11 wrong hostname strategy,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: hostname module ##### Ansible Version: ``` $ ansible --version ansible 2.0.1.0 config file = /Users/brookerj/repos/bbot/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: $ cat /Users/brookerj/repos/bbot/ansible.cfg [defaults] deprecation_warnings = False ##### Environment: 
running from Mac, managing an Opensuse 11.1 32 bit instance ##### Summary: The core modules hostname setting function: https://github.com/ansible/ansible-modules-core/blob/09e2457eb0e811ac293065dd77cd31597ceb2da7/system/hostname.py#L467-L472 The code referenced above incorrectly assumes that all versions of opensuse run systemd. in fact only versions 12 or later do. with the previous versions using the old sysinit v. see here for details of the change: https://www.suse.com/docrep/documents/huz0a6bf9a/systemd_in_suse_linux_enterprise_12_white_paper.pdf ##### Steps To Reproduce: try to set the hostname on an opensuse host. ##### Expected Results: Hostname to be set properly ##### Actual Results: ``` fatal: [Opensuse_11_32]: FAILED! => {""changed"": false, ""cmd"": ""hostnamectl --transient set-hostname Opensles_11_32"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` ",True,"Opensuse version <= 11 wrong hostname strategy - ##### Issue Type: - Bug Report ##### Plugin Name: hostname module ##### Ansible Version: ``` $ ansible --version ansible 2.0.1.0 config file = /Users/brookerj/repos/bbot/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: $ cat /Users/brookerj/repos/bbot/ansible.cfg [defaults] deprecation_warnings = False ##### Environment: running from Mac, managing an Opensuse 11.1 32 bit instance ##### Summary: The core modules hostname setting function: https://github.com/ansible/ansible-modules-core/blob/09e2457eb0e811ac293065dd77cd31597ceb2da7/system/hostname.py#L467-L472 The code referenced above incorrectly assumes that all versions of opensuse run systemd. in fact only versions 12 or later do. with the previous versions using the old sysinit v. see here for details of the change: https://www.suse.com/docrep/documents/huz0a6bf9a/systemd_in_suse_linux_enterprise_12_white_paper.pdf ##### Steps To Reproduce: try to set the hostname on an opensuse host. ##### Expected Results: Hostname to be set properly ##### Actual Results: ``` fatal: [Opensuse_11_32]: FAILED! 
=> {""changed"": false, ""cmd"": ""hostnamectl --transient set-hostname Opensles_11_32"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2} ``` ",1,opensuse version wrong hostname strategy issue type bug report plugin name hostname module ansible version ansible version ansible config file users brookerj repos bbot ansible cfg configured module search path default w o overrides ansible configuration cat users brookerj repos bbot ansible cfg deprecation warnings false environment running from mac managing an opensuse bit instance summary the core modules hostname setting function the code referenced above incorrectly assumes that all versions of opensuse run systemd in fact only versions or later do with the previous versions using the old sysinit v see here for details of the change steps to reproduce try to set the hostname on an opensuse host expected results hostname to be set properly actual results fatal failed changed false cmd hostnamectl transient set hostname opensles failed true msg no such file or directory rc ,1 1471,6377387036.0,IssuesEvent,2017-08-02 09:54:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum module reports OK despite package not installed,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Ansible core module: yum ##### ANSIBLE VERSION ``` vagrant@localhost aon_ansible]$ ansible-playbook --version ansible-playbook 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Have not made any changes, ansible installed via fedora 24 package manager as ""dnf ansible"" ##### OS / ENVIRONMENT Fedora 24 Workstation [vagrant@localhost aon_ansible]$ uname -a Linux localhost.localdomain 4.5.5-300.fc24.x86_64 #1 SMP Thu May 19 13:05:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Using yum module to install several packagges with_items, I get TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) despite docker package is not available PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 ##### STEPS TO REPRODUCE Example play: tasks: - name: Install Docker Engine yum: name={{item}} state=installed with_items: - device-mapper-libs - device-mapper-event-libs - docker Running playbook [vagrant@localhost aon_ansible]$ ansible-playbook -i inventories/test playbooks/docker-provision.yml PLAY [ams_docker] ************************************************************** TASK [setup] ******************************************************************* ok: [10.115.141.232] ok: [10.115.141.233] TASK [Install Docker Engine] *************************************************** ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/Source/aon_ansible/playbooks/docker-provision.retry PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 10.115.141.233 : ok=1 changed=0 unreachable=0 failed=1 Running with verbose: [vagrant@localhost aon_ansible]$ ansible-playbook -v -i 
inventories/test playbooks/docker-provision.yml Using /etc/ansible/ansible.cfg as config file PLAY [ams_docker] ************************************************************** TASK [setup] ******************************************************************* ok: [10.115.141.233] ok: [10.115.141.232] TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/Source/aon_ansible/playbooks/docker-provision.retry PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 10.115.141.233 : ok=1 changed=0 unreachable=0 failed=1 ##### EXPECTED RESULTS Summary would not show ""OK"" on task level and show error message ##### ACTUAL RESULTS Task shows OK ``` TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ``` Note that inner message has {""changed"": false, ""failed"": true, ""msg"": ""No Package matching 'docker' found available, installed or updated"") ",True,"Yum module reports OK despite package not installed - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Ansible core module: yum ##### ANSIBLE VERSION ``` vagrant@localhost aon_ansible]$ ansible-playbook --version ansible-playbook 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search 
path = Default w/o overrides ``` ##### CONFIGURATION Have not made any changes, ansible installed via fedora 24 package manager as ""dnf ansible"" ##### OS / ENVIRONMENT Fedora 24 Workstation [vagrant@localhost aon_ansible]$ uname -a Linux localhost.localdomain 4.5.5-300.fc24.x86_64 #1 SMP Thu May 19 13:05:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Using yum module to install several packagges with_items, I get TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) despite docker package is not available PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 ##### STEPS TO REPRODUCE Example play: tasks: - name: Install Docker Engine yum: name={{item}} state=installed with_items: - device-mapper-libs - device-mapper-event-libs - docker Running playbook [vagrant@localhost aon_ansible]$ ansible-playbook -i inventories/test playbooks/docker-provision.yml PLAY [ams_docker] ************************************************************** TASK [setup] ******************************************************************* ok: [10.115.141.232] ok: [10.115.141.233] TASK [Install Docker Engine] *************************************************** ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/Source/aon_ansible/playbooks/docker-provision.retry PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 10.115.141.233 : ok=1 changed=0 unreachable=0 failed=1 Running with verbose: [vagrant@localhost aon_ansible]$ ansible-playbook -v -i inventories/test playbooks/docker-provision.yml Using /etc/ansible/ansible.cfg as config file PLAY [ams_docker] ************************************************************** TASK [setup] ******************************************************************* ok: [10.115.141.233] ok: [10.115.141.232] TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit 
@/home/vagrant/Source/aon_ansible/playbooks/docker-provision.retry PLAY RECAP ********************************************************************* 10.115.141.232 : ok=1 changed=0 unreachable=0 failed=1 10.115.141.233 : ok=1 changed=0 unreachable=0 failed=1 ##### EXPECTED RESULTS Summary would not show ""OK"" on task level and show error message ##### ACTUAL RESULTS Task shows OK ``` TASK [Install Docker Engine] *************************************************** ok: [10.115.141.233] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ok: [10.115.141.232] => (item=[u'device-mapper-libs', u'device-mapper-event-libs', u'docker']) => {""changed"": false, ""failed"": true, ""item"": [""device-mapper-libs"", ""device-mapper-event-libs"", ""docker""], ""msg"": ""No Package matching 'docker' found available, installed or updated"", ""rc"": 0, ""results"": [""device-mapper-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-libs is already installed"", ""device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64 providing device-mapper-event-libs is already installed""]} ``` Note that inner message has {""changed"": false, ""failed"": true, ""msg"": ""No Package matching 'docker' found available, installed or updated"") ",1,yum module reports ok despite package not installed issue type bug report component name ansible core module yum ansible version vagrant localhost aon ansible ansible playbook version ansible playbook config file etc ansible ansible cfg configured module search path default w o overrides configuration have not made any changes ansible installed via fedora package manager as dnf ansible os environment fedora workstation uname a linux localhost localdomain smp thu may utc gnu linux summary using yum module to install several packagges with items i get task ok item despite docker package is not available play recap ok changed unreachable failed steps to reproduce example play tasks name install docker engine yum name item state installed with items device mapper libs device mapper event libs docker running playbook ansible playbook i inventories test playbooks docker provision yml play task ok ok task ok item ok item no more hosts left to retry use limit home vagrant source aon ansible playbooks docker provision retry play recap ok changed unreachable failed ok changed unreachable failed running with verbose ansible playbook v i inventories test playbooks docker provision yml using etc ansible ansible cfg as config file play task ok ok task ok item changed false failed true item msg no package matching docker found available installed or updated rc results ok item changed false failed true item msg no package matching docker found available installed or updated rc results no more hosts left to retry use limit home vagrant source aon ansible playbooks docker provision retry play recap ok changed unreachable failed ok changed unreachable failed expected results summary would not show ok on task level and show error message actual results task shows ok task ok item changed false failed true item msg no package matching docker found 
available installed or updated rc results ok item changed false failed true item msg no package matching docker found available installed or updated rc results note that inner message has changed false failed true msg no package matching docker found available installed or updated ,1 1823,6577330146.0,IssuesEvent,2017-09-12 00:09:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_template missing backup,affects_2.1 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/ios_template ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY using ios_template with backup: true, I intermittently only get half the switch config. This also seems to affect the --check and --diff features, as the module tries to insert missing parts when they are in fact already present on the switch. I have only seen this behaviour with a large stack of 8 Cisco 3850s, running IOS XE. I have two smaller stacks (of 5 and 2 using the same hardware/software) that don't seem to exhibit this problem. I wonder if some timeout or read buffer is being hit? The problem seems to be intermittent. ##### STEPS TO REPRODUCE Add templated config to an interface numbered Gi3/0/9 or higher, of a stack where the last interface is Gi8/0/48. ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"ios_template missing backup - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/ios_template ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY using ios_template with backup: true, I intermittently only get half the switch config. This also seems to affect the --check and --diff features, as the module tries to insert missing parts when they are in fact already present on the switch. I have only seen this behaviour with a large stack of 8 Cisco 3850s, running IOS XE. I have two smaller stacks (of 5 and 2 using the same hardware/software) that don't seem to exhibit this problem. I wonder if some timeout or read buffer is being hit? The problem seems to be intermittent. ##### STEPS TO REPRODUCE Add templated config to an interface numbered Gi3/0/9 or higher, of a stack where the last interface is Gi8/0/48. 
``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,ios template missing backup issue type bug report component name networking ios template ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary using ios template with backup true i intermittently only get half the switch config this also seems to affect the check and diff features as the module tries to insert missing parts when they are in fact already present on the switch i have only seen this behaviour with a large stack of cisco running ios xe i have two smaller stacks of and using the same hardware software that don t seem to exhibit this problem i wonder if some timeout or read buffer is being hit the problem seems to be intermittent steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used add templated config to an interface numbered or higher of a stack where the last interface is expected results actual results ,1 1359,5870460486.0,IssuesEvent,2017-05-15 04:49:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,django-admin.py using manage.pyc instead of manage.py,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME django_manage module ##### ANSIBLE VERSION N/A ##### SUMMARY I'm never sync .py source code to production, should django-admin.py uses manage.pyc instead of manage.py? https://github.com/ansible/ansible-modules-core/blob/devel/web_infrastructure/django_manage.py#L239 ",True,"django-admin.py using manage.pyc instead of manage.py - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME django_manage module ##### ANSIBLE VERSION N/A ##### SUMMARY I'm never sync .py source code to production, should django-admin.py uses manage.pyc instead of manage.py? https://github.com/ansible/ansible-modules-core/blob/devel/web_infrastructure/django_manage.py#L239 ",1,django admin py using manage pyc instead of manage py issue type feature idea component name django manage module ansible version n a summary i m never sync py source code to production should django admin py uses manage pyc instead of manage py ,1 1905,6577561632.0,IssuesEvent,2017-09-12 01:46:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,S3 module doesn't use AWS Signature Version 4,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: NONE ##### Environment: - CentOS 7 - Ubuntu 14.04 ##### Summary: Using Ansible's S3 module I am not able to copy server-side encrypted files (Server Side Encryption with AWS KMS managed keys) from bucket to local directory although all needed settings are set and AWSCLI works well. EC2 instance has IAM role with permissions to use the appropriate KMS key and access the bucket. I don't set variables access_key and access_secret_key explicitly. I am able to get non-encrypted files from the same bucket using Ansible's S3 module. 
##### Steps To Reproduce: CentOS7: - yum install epel-release - yum install python-pip - yum install --enablerepo epel-testing ansible - pip install --upgrade awscli - aws configure set s3.signature_version s3v4 - ansible -vvv localhost -c local -m s3 -a 'region=eu-west-1 mode=get bucket=my-bucket object=/id_rsa dest=/root/.ssh/id_rsa' But this works as expected: - aws s3 cp 's3://my-bucket/id_rsa' /root/.ssh/id_rsa --region eu-west-1 ##### Expected Results: File id_rsa occurs in the /root/.ssh/ directory. ##### Actual Results: Using /etc/ansible/ansible.cfg as config file ESTABLISH LOCAL CONNECTION FOR USER: root 127.0.0.1 EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )"" ) 127.0.0.1 PUT /tmp/tmppZN1Re TO /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3 127.0.0.1 EXEC LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3; rm -rf ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/"" > /dev/null 2>&1 An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 2823, in main() File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 496, in main download_s3file(module, s3, bucket, obj, dest, retries, version=version) File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 323, in download_s3file key.get_contents_to_filename(dest) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1665, in get_contents_to_filename response_headers=response_headers) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1603, in get_contents_to_file response_headers=response_headers) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1435, in get_file query_args=None) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1467, in _get_file_internal override_num_retries=override_num_retries) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 325, in open override_num_retries=override_num_retries) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 273, in open_read self.resp.reason, body) boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request InvalidArgumentRequests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.Authorizationnull...... localhost | FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""s3"" }, ""parsed"": false } ",True,"S3 module doesn't use AWS Signature Version 4 - ##### Issue Type: - Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: NONE ##### Environment: - CentOS 7 - Ubuntu 14.04 ##### Summary: Using Ansible's S3 module I am not able to copy server-side encrypted files (Server Side Encryption with AWS KMS managed keys) from bucket to local directory although all needed settings are set and AWSCLI works well. EC2 instance has IAM role with permissions to use the appropriate KMS key and access the bucket. I don't set variables access_key and access_secret_key explicitly. I am able to get non-encrypted files from the same bucket using Ansible's S3 module. 
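The same call expressed as a playbook task, with every value copied from the ad-hoc command in the report; against a SigV4-only, KMS-encrypted object it fails in the same way:
```
- name: fetch the KMS-encrypted object with the s3 module
  s3:
    region: eu-west-1
    mode: get
    bucket: my-bucket
    object: /id_rsa
    dest: /root/.ssh/id_rsa
```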
##### Steps To Reproduce: CentOS7: - yum install epel-release - yum install python-pip - yum install --enablerepo epel-testing ansible - pip install --upgrade awscli - aws configure set s3.signature_version s3v4 - ansible -vvv localhost -c local -m s3 -a 'region=eu-west-1 mode=get bucket=my-bucket object=/id_rsa dest=/root/.ssh/id_rsa' But this works as expected: - aws s3 cp 's3://my-bucket/id_rsa' /root/.ssh/id_rsa --region eu-west-1 ##### Expected Results: File id_rsa occurs in the /root/.ssh/ directory. ##### Actual Results: Using /etc/ansible/ansible.cfg as config file ESTABLISH LOCAL CONNECTION FOR USER: root 127.0.0.1 EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )"" ) 127.0.0.1 PUT /tmp/tmppZN1Re TO /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3 127.0.0.1 EXEC LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3; rm -rf ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/"" > /dev/null 2>&1 An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 2823, in main() File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 496, in main download_s3file(module, s3, bucket, obj, dest, retries, version=version) File ""/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3"", line 323, in download_s3file key.get_contents_to_filename(dest) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1665, in get_contents_to_filename response_headers=response_headers) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1603, in get_contents_to_file response_headers=response_headers) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1435, in get_file query_args=None) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 1467, in _get_file_internal override_num_retries=override_num_retries) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 325, in open override_num_retries=override_num_retries) File ""/usr/lib/python2.7/site-packages/boto/s3/key.py"", line 273, in open_read self.resp.reason, body) boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request InvalidArgumentRequests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.Authorizationnull...... localhost | FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""s3"" }, ""parsed"": false } ",1, module doesn t use aws signature version issue type bug report ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible configuration none environment centos ubuntu summary using ansible s module i am not able to copy server side encrypted files server side encryption with aws kms managed keys from bucket to local directory although all needed settings are set and awscli works well instance has iam role with permissions to use the appropriate kms key and access the bucket i don t set variables access key and access secret key explicitly i am able to get non encrypted files from the same bucket using ansible s module steps to reproduce yum install epel release yum install python pip yum install enablerepo epel testing ansible pip install upgrade awscli aws configure set signature version ansible vvv localhost c local m a region eu west mode get bucket my bucket object id rsa dest root ssh id rsa but this works as expected aws cp my bucket id rsa root ssh id rsa region eu west expected results file id rsa occurs in the root ssh directory actual results using etc ansible ansible cfg as config file establish local connection for user root exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp exec lang en us lc all en us lc messages en us usr bin python root ansible tmp ansible tmp rm rf root ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file root ansible tmp ansible tmp line in main file root ansible tmp ansible tmp line in main download module bucket obj dest retries version version file root ansible tmp ansible tmp line in download key get contents to filename dest file usr lib site packages boto key py line in get contents to filename response headers response headers file usr lib site packages boto key py line in get contents to file response headers response headers file usr lib site packages boto key py line in get file query args none file usr lib site packages boto key py line in get file internal override num retries override num retries file usr lib site packages boto key py line in open override num retries override num retries file usr lib site packages boto key py line in open read self resp reason body boto exception bad request invalidargument requests specifying server side encryption with aws kms managed keys require aws signature version authorization null localhost failed changed false failed true invocation module name parsed false ,1 992,4756814279.0,IssuesEvent,2016-10-24 14:58:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,hostname: support alpine(3.4),affects_2.2 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel e326da28ff) last updated 2016/09/13 13:46:08 (GMT -300) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/13 13:47:28 (GMT -300) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/13 13:47:28 (GMT -300) config file = /Users/jbergstroem/Work/node/build/ansible/ansible.cfg configured module search path = ['plugins/library'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Ansible currently returns this if you try to use the 
hostname module on alpine: `fatal: [test-joyent-alpine34-x64-2]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""hostname module cannot be used on platform Linux (Alpine)""}` Supporting it should be relatively straightforward seeing how it utilizes both `/etc/hostname` and `hostname` if busybox is installed. I can see the rationale for not supporting a lot of stuff on alpine but I think people that use it to a point where even busybox isn't installed wouldn't try and call hostname either. ##### STEPS TO REPRODUCE Just call hostname ##### EXPECTED RESULTS Setting the hostname ##### ACTUAL RESULTS ``` fatal: [test-joyent-alpine34-x64-2]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""hostname module cannot be used on platform Linux (Alpine)""} ``` ",True,"hostname: support alpine(3.4) - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel e326da28ff) last updated 2016/09/13 13:46:08 (GMT -300) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/13 13:47:28 (GMT -300) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/13 13:47:28 (GMT -300) config file = /Users/jbergstroem/Work/node/build/ansible/ansible.cfg configured module search path = ['plugins/library'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Ansible currently returns this if you try to use the hostname module on alpine: `fatal: [test-joyent-alpine34-x64-2]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""hostname module cannot be used on platform Linux (Alpine)""}` Supporting it should be relatively straightforward seeing how it utilizes both `/etc/hostname` and `hostname` if busybox is installed. I can see the rationale for not supporting a lot of stuff on alpine but I think people that use it to a point where even busybox isn't installed wouldn't try and call hostname either. ##### STEPS TO REPRODUCE Just call hostname ##### EXPECTED RESULTS Setting the hostname ##### ACTUAL RESULTS ``` fatal: [test-joyent-alpine34-x64-2]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""hostname module cannot be used on platform Linux (Alpine)""} ``` ",1,hostname support alpine issue type feature idea component name hostname ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file users jbergstroem work node build ansible ansible cfg configured module search path configuration os environment n a summary ansible currently returns this if you try to use the hostname module on alpine fatal failed changed false failed true msg hostname module cannot be used on platform linux alpine supporting it should be relatively straightforward seeing how it utilizes both etc hostname and hostname if busybox is installed i can see the rationale for not supporting a lot of stuff on alpine but i think people that use it to a point where even busybox isn t installed wouldn t try and call hostname either steps to reproduce just call hostname expected results setting the hostname actual results fatal failed changed false failed true msg hostname module cannot be used on platform linux alpine ,1 897,4559605657.0,IssuesEvent,2016-09-14 03:16:44,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module - root volume size customization,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY I'm working on a playbook to launch a windows ec2 instance with a custom root volume size. It appears that my play is completely overlooking the 'volumes' attribute when creating the instance though. I'm not getting any errors.. but the instance is created with the default 30GB root volume size. ##### STEPS TO REPRODUCE Below is an example playbook as well as an example role that we have created. Using this configuration it appears that the volumes info is not being taken into consideration when creating an instance. Playbook: ``` --- - include: aws.yml - name: create server in availbility zone hosts: control gather_facts: yes vars_files: - group_vars/secret.yml vars: - app_name: ""TEST"" - app_updates: ""Yes"" - app_persistence: ""Yes"" - instance_type: m3.medium - instance_count: 1 - term_protect: yes - vpc_subnet_id: my-subnet-id - instance_zone: my-az-info roles: - win_launch ``` win_launch role: ``` - name: Launch new instance ec2: region: ""{{ region }}"" keypair: ""{{ keypair }}"" zone: ""{{ instance_zone }}"" group_id: [ ""{{ sg_out.group_id }}"" ] image: ""{{ win_ami_id }}"" instance_type: ""{{ instance_type }}"" assign_public_ip: no termination_protection: ""{{ term_protect }}"" vpc_subnet_id: ""{{ vpc_subnet_id }}"" wait: yes exact_count: ""{{ instance_count }}"" count_tag: Name: ""{{ app_name }}"" instance_tags: Name: ""{{ app_name }}"" Updates: ""{{ app_updates }}"" Persistent: ""{{ app_persistence }}"" user_data: ""{{ lookup('template','userdata.txt.j2') }}"" volumes: - device_name: /dev/xvda volume_size: 80 volume_type: gp2 register: ec2 ``` ##### EXPECTED RESULTS Expect a windows root drive to be created with the size of my picking. ##### ACTUAL RESULTS ec2 instance is created with the default root volume size. 
",True,"ec2 module - root volume size customization - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY I'm working on a playbook to launch a windows ec2 instance with a custom root volume size. It appears that my play is completely overlooking the 'volumes' attribute when creating the instance though. I'm not getting any errors.. but the instance is created with the default 30GB root volume size. ##### STEPS TO REPRODUCE Below is an example playbook as well as an example role that we have created. Using this configuration it appears that the volumes info is not being taken into consideration when creating an instance. Playbook: ``` --- - include: aws.yml - name: create server in availbility zone hosts: control gather_facts: yes vars_files: - group_vars/secret.yml vars: - app_name: ""TEST"" - app_updates: ""Yes"" - app_persistence: ""Yes"" - instance_type: m3.medium - instance_count: 1 - term_protect: yes - vpc_subnet_id: my-subnet-id - instance_zone: my-az-info roles: - win_launch ``` win_launch role: ``` - name: Launch new instance ec2: region: ""{{ region }}"" keypair: ""{{ keypair }}"" zone: ""{{ instance_zone }}"" group_id: [ ""{{ sg_out.group_id }}"" ] image: ""{{ win_ami_id }}"" instance_type: ""{{ instance_type }}"" assign_public_ip: no termination_protection: ""{{ term_protect }}"" vpc_subnet_id: ""{{ vpc_subnet_id }}"" wait: yes exact_count: ""{{ instance_count }}"" count_tag: Name: ""{{ app_name }}"" instance_tags: Name: ""{{ app_name }}"" Updates: ""{{ app_updates }}"" Persistent: ""{{ app_persistence }}"" user_data: ""{{ lookup('template','userdata.txt.j2') }}"" volumes: - device_name: /dev/xvda volume_size: 80 volume_type: gp2 register: ec2 ``` ##### EXPECTED RESULTS Expect a windows root drive to be created with the size of my picking. ##### ACTUAL RESULTS ec2 instance is created with the default root volume size. 
",1, module root volume size customization issue type bug report component name module ansible version ansible os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary i m working on a playbook to launch a windows instance with a custom root volume size it appears that my play is completely overlooking the volumes attribute when creating the instance though i m not getting any errors but the instance is created with the default root volume size steps to reproduce below is an example playbook as well as an example role that we have created using this configuration it appears that the volumes info is not being taken into consideration when creating an instance playbook include aws yml name create server in availbility zone hosts control gather facts yes vars files group vars secret yml vars app name test app updates yes app persistence yes instance type medium instance count term protect yes vpc subnet id my subnet id instance zone my az info roles win launch win launch role name launch new instance region region keypair keypair zone instance zone group id image win ami id instance type instance type assign public ip no termination protection term protect vpc subnet id vpc subnet id wait yes exact count instance count count tag name app name instance tags name app name updates app updates persistent app persistence user data lookup template userdata txt volumes device name dev xvda volume size volume type register expected results expect a windows root drive to be created with the size of my picking actual results instance is created with the default root volume size ,1 1062,4877234082.0,IssuesEvent,2016-11-16 15:14:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,service task: unsupported parameter for module: runlevel against Ubutun 16 LTS,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X 10.11.6 (15G1108) ##### SUMMARY We using a simple task service with runlevel against a Ubuntu 16.04.1 LTS (using systemd), I got the following error message: ""unsupported parameter for module: runlevel"" ##### STEPS TO REPRODUCE The task is like this: ````service: name=my_service_name runlevel=99 enabled=yes state=started```` and the sysV init script file exists on `/etc/init.d/my_service_name` ##### EXPECTED RESULTS A failure about usage of runlevel option. I know systemd module now exists but it was working just fine on Ansible 2.1.3.0 and I didn't read anything about deprecating this feature. ##### ACTUAL RESULTS ```` fatal: [machine]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: runlevel""} ```` ",True,"service task: unsupported parameter for module: runlevel against Ubutun 16 LTS - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X 10.11.6 (15G1108) ##### SUMMARY We using a simple task service with runlevel against a Ubuntu 16.04.1 LTS (using systemd), I got the following error message: ""unsupported parameter for module: runlevel"" ##### STEPS TO REPRODUCE The task is like this: ````service: name=my_service_name runlevel=99 enabled=yes state=started```` and the sysV init script file exists on `/etc/init.d/my_service_name` ##### EXPECTED RESULTS A failure about usage of runlevel option. I know systemd module now exists but it was working just fine on Ansible 2.1.3.0 and I didn't read anything about deprecating this feature. ##### ACTUAL RESULTS ```` fatal: [machine]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: runlevel""} ```` ",1,service task unsupported parameter for module runlevel against ubutun lts issue type bug report component name service ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos x summary we using a simple task service with runlevel against a ubuntu lts using systemd i got the following error message unsupported parameter for module runlevel steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the task is like this service name my service name runlevel enabled yes state started and the sysv init script file exists on etc init d my service name expected results a failure about usage of runlevel option i know systemd module now exists but it was working just fine on ansible and i didn t read anything about deprecating this feature actual results fatal failed changed false failed true msg unsupported parameter for module runlevel ,1 1290,5467343237.0,IssuesEvent,2017-03-10 00:49:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_group: add tags,affects_1.8 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: Feature Idea ##### Ansible Version: ansible 1.8 ##### Environment: N/A ##### Summary: Please add the ability to create and modify tags associated with the security group. At least being able to set the Name tag would be helpful. ##### Steps To Reproduce: It would be nice if the feature was implemented like the instance_tags parameter in the ec2 module. ##### Expected Results: The ability to set tags for security groups. ##### Actual Results: N/A ",True,"ec2_group: add tags - ##### Issue Type: Feature Idea ##### Ansible Version: ansible 1.8 ##### Environment: N/A ##### Summary: Please add the ability to create and modify tags associated with the security group. At least being able to set the Name tag would be helpful. ##### Steps To Reproduce: It would be nice if the feature was implemented like the instance_tags parameter in the ec2 module. ##### Expected Results: The ability to set tags for security groups. 
##### Actual Results: N/A ",1, group add tags issue type feature idea ansible version ansible environment n a summary please add the ability to create and modify tags associated with the security group at least being able to set the name tag would be helpful steps to reproduce it would be nice if the feature was implemented like the instance tags parameter in the module expected results the ability to set tags for security groups actual results n a ,1 910,4581543090.0,IssuesEvent,2016-09-19 06:15:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config does not allow looping under parents,affects_2.1 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ansible 2.1.1.0 config file = /home/ansible/ios_config/ansible.cfg configured module search path = Default w/o overrides ansible@unl01:~/ios_config$ ##### CONFIGURATION default ansible.cfg ##### OS / ENVIRONMENT Ubuntu 14.04 LTS ##### SUMMARY I am unable to use looping (with_items) with the parents option of ios_config. ##### STEPS TO REPRODUCE ``` - name: no shut ios_config: provider: ""{{ provider }}"" lines: - no shutdown parents: ""interface {{ item.interface }}"" with_items: - { interface : Ethernet0/1 } - { interface : Ethernet0/2 } - { interface : Ethernet0/3 } ``` OR: ``` - name: no shut ios_config: provider: ""{{ provider }}"" lines: - no shutdown parents: ""interface {{ item.interface }}"" with_items: - ""{{ ip_addresses }}"" (in host inventory) ip_addresses: - interface: Loopback0 description: Router-ID ip_address: 1.1.1.1 ip_mask: 255.255.255.255 - interface: Ethernet0/1 description: ""To-ISP1"" ip_address: 172.31.1.2 ip_mask: 255.255.255.252 - interface: Ethernet0/2 description: ""To-SW1"" ip_address: 10.1.1.2 ip_mask: 255.255.255.0 standby_grp: 1 standby_ip: 10.1.1.1 standby_pri: 200 - interface: Ethernet0/3 description: ""To-WAN2"" ip_address: 10.1.254.253 ip_mask: 255.255.255.252 ``` ---------------- OUTPUT----------------- ________________ < TASK [no shut] > ---------------- ``` task path: /home/ansible/ios_config/ios_config_play.yml:38 fatal: [wan1.cisco.com]: FAILED! => {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined\n\nThe error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: no shut\n ^ here\n""} msg: the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined The error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may be elsewhere in the file depending on the exact syntax problem. 
The offending line appears to be: - name: no shut ^ here ``` ##### EXPECTED RESULTS the no shut command is run under interface X, interface Y, interface Z as defined in the with_items list ##### ACTUAL RESULTS With this debug, I can see that the with_items should work ``` - name: test debug: msg=""{{ item.interface }}"" with_items: - ""{{ ip_addresses }}"" ``` ``` _____________ < TASK [test] > ------------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || task path: /home/ansible/ios_config/ios_config_play.yml:33 ok: [wan1.cisco.com] => (item={u'interface': u'Loopback0', u'ip_mask': u'255.255.255.255', u'ip_address': u'1.1.1.1', u'description': u'Router-ID'}) => { ""invocation"": { ""module_args"": { ""msg"": ""Loopback0"" }, ""module_name"": ""debug"" }, ""item"": { ""description"": ""Router-ID"", ""interface"": ""Loopback0"", ""ip_address"": ""1.1.1.1"", ""ip_mask"": ""255.255.255.255"" }, ""msg"": ""Loopback0"" } ok: [wan1.cisco.com] => (item={u'interface': u'Ethernet0/1', u'ip_mask': u'255.255.255.252', u'ip_address': u'172.31.1.2', u'description': u'To-ISP1'}) => { ""invocation"": { ""module_args"": { ""msg"": ""Ethernet0/1"" }, ""module_name"": ""debug"" }, ""item"": { ""description"": ""To-ISP1"", ""interface"": ""Ethernet0/1"", ""ip_address"": ""172.31.1.2"", ""ip_mask"": ""255.255.255.252"" }, ""msg"": ""Ethernet0/1"" ``` etc. This is an extremely standard networking use case (i.e. making selective changes under defined interface parent sections). If I've messed up anything basic then I apologise in advance, but I can't see how the debug works but the real thing fails. I've even gone back to basics and manually defined the dictionary instead of calling it ",True,"ios_config does not allow looping under parents - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ansible 2.1.1.0 config file = /home/ansible/ios_config/ansible.cfg configured module search path = Default w/o overrides ansible@unl01:~/ios_config$ ##### CONFIGURATION default ansible.cfg ##### OS / ENVIRONMENT Ubuntu 14.04 LTS ##### SUMMARY I am unable to use looping (with_items) with the parents option of ios_config. ##### STEPS TO REPRODUCE ``` - name: no shut ios_config: provider: ""{{ provider }}"" lines: - no shutdown parents: ""interface {{ item.interface }}"" with_items: - { interface : Ethernet0/1 } - { interface : Ethernet0/2 } - { interface : Ethernet0/3 } ``` OR: ``` - name: no shut ios_config: provider: ""{{ provider }}"" lines: - no shutdown parents: ""interface {{ item.interface }}"" with_items: - ""{{ ip_addresses }}"" (in host inventory) ip_addresses: - interface: Loopback0 description: Router-ID ip_address: 1.1.1.1 ip_mask: 255.255.255.255 - interface: Ethernet0/1 description: ""To-ISP1"" ip_address: 172.31.1.2 ip_mask: 255.255.255.252 - interface: Ethernet0/2 description: ""To-SW1"" ip_address: 10.1.1.2 ip_mask: 255.255.255.0 standby_grp: 1 standby_ip: 10.1.1.1 standby_pri: 200 - interface: Ethernet0/3 description: ""To-WAN2"" ip_address: 10.1.254.253 ip_mask: 255.255.255.252 ``` ---------------- OUTPUT----------------- ________________ < TASK [no shut] > ---------------- ``` task path: /home/ansible/ios_config/ios_config_play.yml:38 fatal: [wan1.cisco.com]: FAILED! => {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. 
The error was: 'item' is undefined\n\nThe error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: no shut\n ^ here\n""} msg: the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined The error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - name: no shut ^ here ``` ##### EXPECTED RESULTS the no shut command is run under interface X, interface Y, interface Z as defined in the with_items list ##### ACTUAL RESULTS With this debug, I can see that the with_items should work ``` - name: test debug: msg=""{{ item.interface }}"" with_items: - ""{{ ip_addresses }}"" ``` ``` _____________ < TASK [test] > ------------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || task path: /home/ansible/ios_config/ios_config_play.yml:33 ok: [wan1.cisco.com] => (item={u'interface': u'Loopback0', u'ip_mask': u'255.255.255.255', u'ip_address': u'1.1.1.1', u'description': u'Router-ID'}) => { ""invocation"": { ""module_args"": { ""msg"": ""Loopback0"" }, ""module_name"": ""debug"" }, ""item"": { ""description"": ""Router-ID"", ""interface"": ""Loopback0"", ""ip_address"": ""1.1.1.1"", ""ip_mask"": ""255.255.255.255"" }, ""msg"": ""Loopback0"" } ok: [wan1.cisco.com] => (item={u'interface': u'Ethernet0/1', u'ip_mask': u'255.255.255.252', u'ip_address': u'172.31.1.2', u'description': u'To-ISP1'}) => { ""invocation"": { ""module_args"": { ""msg"": ""Ethernet0/1"" }, ""module_name"": ""debug"" }, ""item"": { ""description"": ""To-ISP1"", ""interface"": ""Ethernet0/1"", ""ip_address"": ""172.31.1.2"", ""ip_mask"": ""255.255.255.252"" }, ""msg"": ""Ethernet0/1"" ``` etc. This is an extremely standard networking use case (i.e. making selective changes under defined interface parent sections). If I've messed up anything basic then I apologise in advance, but I can't see how the debug works but the real thing fails. 
I've even gone back to basics and manually defined the dictionary instead of calling it ",1,ios config does not allow looping under parents issue type bug report component name ios config ansible version ansible config file home ansible ios config ansible cfg configured module search path default w o overrides ansible ios config configuration default ansible cfg os environment ubuntu lts summary i am unable to use looping with items with the parents option of ios config steps to reproduce name no shut ios config provider provider lines no shutdown parents interface item interface with items interface interface interface or name no shut ios config provider provider lines no shutdown parents interface item interface with items ip addresses in host inventory ip addresses interface description router id ip address ip mask interface description to ip address ip mask interface description to ip address ip mask standby grp standby ip standby pri interface description to ip address ip mask output task path home ansible ios config ios config play yml fatal failed failed true msg the field args has an invalid value which appears to include a variable that is undefined the error was item is undefined n nthe error appears to have been in home ansible ios config ios config play yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name no shut n here n msg the field args has an invalid value which appears to include a variable that is undefined the error was item is undefined the error appears to have been in home ansible ios config ios config play yml line column but may be elsewhere in the file depending on the exact syntax problem the offending line appears to be name no shut here expected results the no shut command is run under interface x interface y interface z as defined in the with items list actual results with this debug i can see that the with items should work name test debug msg item interface with items ip addresses oo w task path home ansible ios config ios config play yml ok item u interface u u ip mask u u ip address u u description u router id invocation module args msg module name debug item description router id interface ip address ip mask msg ok item u interface u u ip mask u u ip address u u description u to invocation module args msg module name debug item description to interface ip address ip mask msg etc this is an extremely standard networking use case i e making selective changes under defined interface parent sections if i ve messed up anything basic then i apologise in advance but i can t see how the debug works but the real thing fails i ve even gone back to basics and manually defined the dictionary instead of calling it ,1 1051,4863245681.0,IssuesEvent,2016-11-14 14:58:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Unarchive module thinks existing local archive is nonexistent,affects_2.1 bug_report waiting_on_maintainer," Issue close to this one: #932 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - unarchive module ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION Nothing changed. ##### OS / ENVIRONMENT from Fedora 24 workstation, to RHEL 7.2 workstation ##### SUMMARY Unarchive module is unable to unzip file located on remote machine (I am using remote_src=yes), failing with: > FAILED! 
=> {""failed"": true, ""msg"": ""file or module does not exist: /home/cdk/cdk.zip""} When using default remote_src=no, module is able to unzip /home/agajdosi/cdk.zip on local machine and put it in /home/cdk/ on remote machine. ##### STEPS TO REPRODUCE 1. Put .zip file in remote machine. 2. Write unarchive task into playbook. 3. Run playbook. ``` --- - hosts: [rhel] remote_user: cdk tasks: - name: Unzip CDK zip unarchive: remote_src: yes src: /home/cdk/cdk.zip dest: /home/cdk/ ``` ##### EXPECTED RESULTS Should unzip local cdk.zip file and extract it into home directory of user cdk. ##### ACTUAL RESULTS Fails. ``` fatal: [foo]: FAILED! => {""failed"": true, ""msg"": ""file or module does not exist: /home/cdk/cdk.zip""} ``` ",True,"Unarchive module thinks existing local archive is nonexistent - Issue close to this one: #932 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - unarchive module ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION Nothing changed. ##### OS / ENVIRONMENT from Fedora 24 workstation, to RHEL 7.2 workstation ##### SUMMARY Unarchive module is unable to unzip file located on remote machine (I am using remote_src=yes), failing with: > FAILED! => {""failed"": true, ""msg"": ""file or module does not exist: /home/cdk/cdk.zip""} When using default remote_src=no, module is able to unzip /home/agajdosi/cdk.zip on local machine and put it in /home/cdk/ on remote machine. ##### STEPS TO REPRODUCE 1. Put .zip file in remote machine. 2. Write unarchive task into playbook. 3. Run playbook. ``` --- - hosts: [rhel] remote_user: cdk tasks: - name: Unzip CDK zip unarchive: remote_src: yes src: /home/cdk/cdk.zip dest: /home/cdk/ ``` ##### EXPECTED RESULTS Should unzip local cdk.zip file and extract it into home directory of user cdk. ##### ACTUAL RESULTS Fails. ``` fatal: [foo]: FAILED! 
=> {""failed"": true, ""msg"": ""file or module does not exist: /home/cdk/cdk.zip""} ``` ",1,unarchive module thinks existing local archive is nonexistent issue close to this one issue type bug report component name unarchive module ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing changed os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific from fedora workstation to rhel workstation summary unarchive module is unable to unzip file located on remote machine i am using remote src yes failing with failed failed true msg file or module does not exist home cdk cdk zip when using default remote src no module is able to unzip home agajdosi cdk zip on local machine and put it in home cdk on remote machine steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used put zip file in remote machine write unarchive task into playbook run playbook hosts remote user cdk tasks name unzip cdk zip unarchive remote src yes src home cdk cdk zip dest home cdk expected results should unzip local cdk zip file and extract it into home directory of user cdk actual results fails fatal failed failed true msg file or module does not exist home cdk cdk zip ,1 1877,6577504988.0,IssuesEvent,2017-09-12 01:22:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,service: reload netfilter-persistent on ubuntu 16.04,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` 2.0.1.0 ``` ##### SUMMARY service module can't reload netfilter-persistent on ubuntu 16.04 ##### STEPS TO REPRODUCE ``` yml - name: iptables reload service: name=netfilter-persistent state=reloaded ``` ##### ACTUAL RESULTS ``` fatal: [myhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to reload netfilter-persistent.service: Job type reload is not applicable for unit netfilter-persistent.service.\nSee system logs and 'systemctl status netfilter-persistent.service' for details.\n""} ``` But running with shell: ``` yml - name: systemd iptables reload shell: service netfilter-persistent reload ``` works.... ``` RUNNING HANDLER [internal/firewall : systemd iptables reload] ****************** changed: [myhost] [WARNING]: Consider using service module rather than running service ``` ",True,"service: reload netfilter-persistent on ubuntu 16.04 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service ##### ANSIBLE VERSION ``` 2.0.1.0 ``` ##### SUMMARY service module can't reload netfilter-persistent on ubuntu 16.04 ##### STEPS TO REPRODUCE ``` yml - name: iptables reload service: name=netfilter-persistent state=reloaded ``` ##### ACTUAL RESULTS ``` fatal: [myhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to reload netfilter-persistent.service: Job type reload is not applicable for unit netfilter-persistent.service.\nSee system logs and 'systemctl status netfilter-persistent.service' for details.\n""} ``` But running with shell: ``` yml - name: systemd iptables reload shell: service netfilter-persistent reload ``` works.... 
``` RUNNING HANDLER [internal/firewall : systemd iptables reload] ****************** changed: [myhost] [WARNING]: Consider using service module rather than running service ``` ",1,service reload netfilter persistent on ubuntu issue type bug report component name service ansible version summary service module can t reload netfilter persistent on ubuntu steps to reproduce yml name iptables reload service name netfilter persistent state reloaded actual results fatal failed changed false failed true msg failed to reload netfilter persistent service job type reload is not applicable for unit netfilter persistent service nsee system logs and systemctl status netfilter persistent service for details n but running with shell yml name systemd iptables reload shell service netfilter persistent reload works running handler changed consider using service module rather than running service ,1 1805,6575934355.0,IssuesEvent,2017-09-11 17:53:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Feature Idea - add xattr support in FILE & STAT module. (chattr extended attributes in FS),affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME file stat ##### ANSIBLE VERSION ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200) lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200) lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 7.2 (Maipo) Release: 7.2 Codename: Maipo ##### SUMMARY It would be useful add xattr (extended attributes) native support in file and stat modules. Currently, I have to use shell module (chattr and lsattr) to get xttr setting from files. ##### EXPECTED RESULTS Implementation idea: file module: Could be added an additional parameter named xattr or chattr. stat modue: Could be added an addtional field named stat.xattr ",True,"Feature Idea - add xattr support in FILE & STAT module. (chattr extended attributes in FS) - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME file stat ##### ANSIBLE VERSION ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200) lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200) lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 7.2 (Maipo) Release: 7.2 Codename: Maipo ##### SUMMARY It would be useful add xattr (extended attributes) native support in file and stat modules. Currently, I have to use shell module (chattr and lsattr) to get xttr setting from files. ##### EXPECTED RESULTS Implementation idea: file module: Could be added an additional parameter named xattr or chattr. 
stat modue: Could be added an addtional field named stat.xattr ",1,feature idea add xattr support in file stat module chattr extended attributes in fs issue type feature idea component name file stat ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides os environment lsb version core core noarch distributor id redhatenterpriseserver description red hat enterprise linux server release maipo release codename maipo summary it would be useful add xattr extended attributes native support in file and stat modules currently i have to use shell module chattr and lsattr to get xttr setting from files expected results implementation idea file module could be added an additional parameter named xattr or chattr stat modue could be added an addtional field named stat xattr ,1 1831,6577356939.0,IssuesEvent,2017-09-12 00:20:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"linode module: ""name"" parameter required, but documentation says it isn't",affects_2.1 bug_report cloud docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the ""name"" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says ""name is required for active state"". (P.S. I don't actually understand what value the ""name"" parameter should have, but using my server's hostname ""caprice"" makes the playbook run fine. What is the purpose of the ""name"" parameter, and how is it different from the ""linode_id"" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named ""reboot.yml"": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: ""{{ linode_api_key }}"" # name: caprice linode_id: ""{{ linode_id }}"" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the ""name"" parameter. 
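Restating the finding above as a task (all values already appear in this report): with the name line uncommented the restart goes through, even though the documentation marks the parameter as optional:
```
- name: Reboot the server
  local_action:
    module: linode
    api_key: ""{{ linode_api_key }}""
    name: caprice
    linode_id: ""{{ linode_id }}""
    state: restarted
```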
##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" && echo ansible-tmp-1465211261.42-104881048472456=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" ) && sleep 0'""'""'' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf ""/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/"" > /dev/null 2>&1 && sleep 0'""'""'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: /Users/nelson/Code/server_documents/ansible/reboot.yml:5 ESTABLISH LOCAL CONNECTION FOR USER: nelson EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" && echo ansible-tmp-1465211263.04-60988620546823=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" ) && sleep 0' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf ""/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/"" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_key"": ""vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V"", ""datacenter"": null, ""distribution"": null, ""linode_id"": 1814698, ""name"": null, ""password"": null, ""payment_term"": 1, ""plan"": null, ""ssh_pub_key"": null, ""state"": ""restarted"", ""swap"": 512, ""wait"": true, ""wait_timeout"": ""300""}, ""module_name"": ""linode""}, ""msg"": ""name is required for active state""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ``` ",True,"linode module: ""name"" parameter required, but documentation says it isn't - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the ""name"" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says ""name is required for active state"". (P.S. I don't actually understand what value the ""name"" parameter should have, but using my server's hostname ""caprice"" makes the playbook run fine. What is the purpose of the ""name"" parameter, and how is it different from the ""linode_id"" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named ""reboot.yml"": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: ""{{ linode_api_key }}"" # name: caprice linode_id: ""{{ linode_id }}"" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the ""name"" parameter. 
##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" && echo ansible-tmp-1465211261.42-104881048472456=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" ) && sleep 0'""'""'' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf ""/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/"" > /dev/null 2>&1 && sleep 0'""'""'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: /Users/nelson/Code/server_documents/ansible/reboot.yml:5 ESTABLISH LOCAL CONNECTION FOR USER: nelson EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" && echo ansible-tmp-1465211263.04-60988620546823=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" ) && sleep 0' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf ""/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/"" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_key"": ""vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V"", ""datacenter"": null, ""distribution"": null, ""linode_id"": 1814698, ""name"": null, ""password"": null, ""payment_term"": 1, ""plan"": null, ""ssh_pub_key"": null, ""state"": ""restarted"", ""swap"": 512, ""wait"": true, ""wait_timeout"": ""300""}, ""module_name"": ""linode""}, ""msg"": ""name is required for active state""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ``` ",1,linode module name parameter required but documentation says it isn t issue type documentation report component name linode module ansible version ansible config file configured module search path default w o overrides configuration os environment running ansible from os x summary the documentation for the linode module at claims that the name parameter is not required but i seem to be unable to successfully use the linode module without it it says name is required for active state p s i don t actually understand what value the name parameter should have but using my server s hostname caprice makes the playbook run fine what is the purpose of the name parameter and how is it different from the linode id parameter steps to reproduce here s a sample playbook named reboot yml hosts caprice tasks name reboot the server local action module linode api key linode api key name caprice linode id linode id state restarted expected results i expected the playbook to run successfully without the name parameter actual results vin ansible nelson ansible playbook reboot yml ask vault pass vvvv no config file found using defaults vault password loaded callback default of type stdout playbook reboot yml plays in reboot yml play task establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r caprice bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpjrqjwt to home nelson ansible tmp ansible tmp setup ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r tt caprice bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home nelson ansible tmp ansible tmp setup rm rf home nelson ansible tmp ansible tmp dev null sleep ok task task path users nelson code server documents ansible reboot yml establish local connection for user nelson exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpmbojkl to users 
nelson ansible tmp ansible tmp linode exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users nelson ansible tmp ansible tmp linode rm rf users nelson ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api key datacenter null distribution null linode id name null password null payment term plan null ssh pub key null state restarted swap wait true wait timeout module name linode msg name is required for active state no more hosts left to retry use limit reboot retry play recap caprice ok changed unreachable failed vin ansible nelson ,1 1288,5448199801.0,IssuesEvent,2017-03-07 15:22:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,S3 - Delete removes entire bucket instead of provided object,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible/ansible-modules-core/amazon/s3.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION No changes in ansible.cfg ##### OS / ENVIRONMENT ami-c481fad3 Amazon Linux AMI 2016.09.0 was released on 2016-09-27 ##### SUMMARY The s3 module when used with the mode: delete, it is expected to throw an error when an object is defined. Instead it deletes the entire bucket instead of the object. ##### STEPS TO REPRODUCE ``` - name: ""Delete file from S3"" local_action: module: s3 mode: delete bucket: ""files-us-east-1"" object: ""/env/stage/10/backup-10"" ``` ##### EXPECTED RESULTS Throw an error. ##### ACTUAL RESULTS It deletes the entire bucket. ``` TASK [Delete file from S3] ***************************************** task path: /vagrant/aws/local-delete-s3-bucket.yml:16 ESTABLISH LOCAL CONNECTION FOR USER: vagrant localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295 `"" )' localhost PUT /tmp/tmpHbvO9F TO /home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/s3 localhost EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/local/prog/apps/ansible2/bin/python2.6 /home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/s3; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/"" > /dev/null 2>&1' changed: [localhost -> localhost] => {""changed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""bucket"": ""files-us-east-1"", ""dest"": null, ""ec2_url"": null, ""encrypt"": true, ""expiry"": 600, ""headers"": null, ""marker"": null, ""max_keys"": 1000, ""metadata"": null, ""mode"": ""delete"", ""object"": ""/env/stage/10/backup-10"", ""overwrite"": ""always"", ""permission"": [""private""], ""prefix"": null, ""profile"": null, ""region"": null, ""retries"": 0, ""s3_url"": null, ""security_token"": null, ""src"": null, ""validate_certs"": true, ""version"": null}, ""module_name"": ""s3""}, ""msg"": ""Bucket files-us-east-1 and all keys have been deleted.""} ``` ",True,"S3 - Delete removes entire bucket instead of provided object - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible/ansible-modules-core/amazon/s3.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION No changes in ansible.cfg ##### OS / ENVIRONMENT ami-c481fad3 Amazon Linux AMI 2016.09.0 was released on 2016-09-27 ##### SUMMARY The s3 module when used with the mode: delete, it is expected to throw an error when 
an object is defined. Instead it deletes the entire bucket instead of the object. ##### STEPS TO REPRODUCE ``` - name: ""Delete file from S3"" local_action: module: s3 mode: delete bucket: ""files-us-east-1"" object: ""/env/stage/10/backup-10"" ``` ##### EXPECTED RESULTS Throw an error. ##### ACTUAL RESULTS It deletes the entire bucket. ``` TASK [Delete file from S3] ***************************************** task path: /vagrant/aws/local-delete-s3-bucket.yml:16 ESTABLISH LOCAL CONNECTION FOR USER: vagrant localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295 `"" )' localhost PUT /tmp/tmpHbvO9F TO /home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/s3 localhost EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/local/prog/apps/ansible2/bin/python2.6 /home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/s3; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1477559860.67-160911494850295/"" > /dev/null 2>&1' changed: [localhost -> localhost] => {""changed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""bucket"": ""files-us-east-1"", ""dest"": null, ""ec2_url"": null, ""encrypt"": true, ""expiry"": 600, ""headers"": null, ""marker"": null, ""max_keys"": 1000, ""metadata"": null, ""mode"": ""delete"", ""object"": ""/env/stage/10/backup-10"", ""overwrite"": ""always"", ""permission"": [""private""], ""prefix"": null, ""profile"": null, ""region"": null, ""retries"": 0, ""s3_url"": null, ""security_token"": null, ""src"": null, ""validate_certs"": true, ""version"": null}, ""module_name"": ""s3""}, ""msg"": ""Bucket files-us-east-1 and all keys have been deleted.""} ``` ",1, delete removes entire bucket instead of provided object issue type bug report component name ansible ansible modules core amazon py ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes in ansible cfg os environment ami amazon linux ami was released on summary the module when used with the mode delete it is expected to throw an error when an object is defined instead it deletes the entire bucket instead of the object steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name delete file from local action module mode delete bucket files us east object env stage backup expected results throw an error actual results it deletes the entire bucket task task path vagrant aws local delete bucket yml establish local connection for user vagrant localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put tmp to home vagrant ansible tmp ansible tmp localhost exec bin sh c lang en gb utf lc all en gb utf lc messages en gb utf usr local prog apps bin home vagrant ansible tmp ansible tmp rm rf home vagrant ansible tmp ansible tmp dev null changed changed true invocation module args aws access key null aws secret key null bucket files us east dest null url null encrypt true expiry headers null marker null max keys metadata null mode delete object env stage backup overwrite always permission prefix null profile null region null retries url null security token null src null validate certs true version null module name msg bucket files us east and all keys 
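On the S3 report above: if memory of the s3 module docs is right, mode: delete is defined to remove the whole bucket, while mode: delobj targets a single key. A sketch of the single-object form, reusing the reporter's bucket and key names:
```
- name: Delete one object from S3
  local_action:
    module: s3
    mode: delobj
    bucket: "files-us-east-1"
    object: "/env/stage/10/backup-10"
```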
have been deleted ,1 1022,4817003281.0,IssuesEvent,2016-11-04 12:11:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Git module does not work via http proxy,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - git module ##### ANSIBLE VERSION ``` - ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Suse 12 ##### SUMMARY Git module does not work via http proxy ##### STEPS TO REPRODUCE The connection which is initialized on the remote host, does not include proxy and is going straight to destination IP vi default gateway. Example task ``` - name: git reset environment: http_proxy: 'http://x.x.x.x:3128' https_proxy: 'http://x.x.x.x:3128' git: repo: https://bitbucket.org/account/myproject.git dest: /tmp update: yes force: yes ``` Ended with Connection timed error ``` fatal: [remote.local]: FAILED! => {""changed"": false, ""cmd"": ""/usr/bin/git ls-remote origin -h refs/heads/master"", ""failed"": true, ""invocation"": {""module_args"": {""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp"", ""executable"": null, ""force"": true, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": ""https://bitbucket.org/account/myproject.git"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD""}, ""module_name"": ""git""}, ""msg"": ""fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out"", ""rc"": 128, ""stderr"": ""fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out\n"", ""stdout"": """", ""stdout_lines"": []} ``` Tcpdump on a remote machine shows ,,, 12:10:26.074852 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447144044 ecr 0,nop,wscale 7], length 0 12:10:42.090905 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447148048 ecr 0,nop,wscale 7], length 0 12:11:14.154911 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447156064 ecr 0,nop,wscale 7], length 0 ,,, ##### EXPECTED RESULTS git module, on the remote host, should initialize connection via proxy server to connect to remote repository. ",True,"Git module does not work via http proxy - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - git module ##### ANSIBLE VERSION ``` - ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Suse 12 ##### SUMMARY Git module does not work via http proxy ##### STEPS TO REPRODUCE The connection which is initialized on the remote host, does not include proxy and is going straight to destination IP vi default gateway. Example task ``` - name: git reset environment: http_proxy: 'http://x.x.x.x:3128' https_proxy: 'http://x.x.x.x:3128' git: repo: https://bitbucket.org/account/myproject.git dest: /tmp update: yes force: yes ``` Ended with Connection timed error ``` fatal: [remote.local]: FAILED! 
=> {""changed"": false, ""cmd"": ""/usr/bin/git ls-remote origin -h refs/heads/master"", ""failed"": true, ""invocation"": {""module_args"": {""accept_hostkey"": false, ""bare"": false, ""clone"": true, ""depth"": null, ""dest"": ""/tmp"", ""executable"": null, ""force"": true, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": ""https://bitbucket.org/account/myproject.git"", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD""}, ""module_name"": ""git""}, ""msg"": ""fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out"", ""rc"": 128, ""stderr"": ""fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out\n"", ""stdout"": """", ""stdout_lines"": []} ``` Tcpdump on a remote machine shows ,,, 12:10:26.074852 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447144044 ecr 0,nop,wscale 7], length 0 12:10:42.090905 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447148048 ecr 0,nop,wscale 7], length 0 12:11:14.154911 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447156064 ecr 0,nop,wscale 7], length 0 ,,, ##### EXPECTED RESULTS git module, on the remote host, should initialize connection via proxy server to connect to remote repository. ",1,git module does not work via http proxy please do not report issues requests related to ansible modules here report them to the appropriate modules core or modules extras project also verify first that your issue request is not already reported in github issue type bug report component name git module ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific suse summary git module does not work via http proxy steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the connection which is initialized on the remote host does not include proxy and is going straight to destination ip vi default gateway example task name git reset environment http proxy https proxy git repo dest tmp update yes force yes ended with connection timed error fatal failed changed false cmd usr bin git ls remote origin h refs heads master failed true invocation module args accept hostkey false bare false clone true depth null dest tmp executable null force true key file null recursive true reference null refspec null remote origin repo ssh opts null track submodules false update true verify commit false version head module name git msg fatal unable to access failed to connect to bitbucket org port connection timed out rc stderr fatal unable to access failed to connect to bitbucket org port connection timed out n stdout stdout lines tcpdump on a remote machine shows ip remote local bitbucket org https flags seq win options length ip remote local bitbucket org https flags seq win options length ip remote local bitbucket org https flags seq win options length expected results git module on the remote host 
should initialize connection via proxy server to connect to remote repository ,1 922,4622717663.0,IssuesEvent,2016-09-27 08:36:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_config isn't idempotent (in some cases),affects_2.2 bug_report networking P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 547cea556f) last updated 2016/09/20 12:12:18 (GMT +100) lib/ansible/modules/core: (devel 12a7027c49) last updated 2016/09/20 15:11:43 (GMT +100) lib/ansible/modules/extras: (devel db7a3f48e1) last updated 2016/09/20 11:53:00 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` - name: setup nxos_config: commands: - no description - no shutdown parents: - interface Ethernet2/5 match: none provider: ""{{ cli }}"" - name: configure device with config nxos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.updates is not defined"" - name: check device with config nxos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: # Idempotent test # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.changed == false"" - ""result.updates is not defined"" ``` ``` cat templates/basic/config.j2 interface Ethernet2/5 description this is a test shutdown ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` TASK [test_nxos_config : configure device with config] ************************* task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214 `"" && echo ansible-tmp-1474483552.07-278464852480214=""` echo $HOME/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214 `"" ) && sleep 0' PUT /tmp/tmpmSyMj7 TO /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/ /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/"" > /dev/null 2>&1 && sleep 0' changed: [nxos01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""nxos01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""nxos01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Ethernet2/5\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, 
""warnings"": [] } TASK [test_nxos_config : assert] *********************************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:21 ok: [nxos01] => { ""changed"": false, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == true"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" }, ""msg"": ""all assertions passed"" } TASK [test_nxos_config : check device with config] ***************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:27 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702 `"" && echo ansible-tmp-1474483562.39-244603509576702=""` echo $HOME/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702 `"" ) && sleep 0' PUT /tmp/tmpTQ_P7m TO /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/ /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/"" > /dev/null 2>&1 && sleep 0' changed: [nxos01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""nxos01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""nxos01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Ethernet2/5\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_nxos_config : assert] *********************************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:34 fatal: [nxos01]: FAILED! 
=> { ""assertion"": ""result.changed == false"", ""changed"": false, ""evaluated_to"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == false"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" } } to retry, use: --limit @/home/johnb/git/ansible-inc/test-network-modules/nxos.retry PLAY RECAP ********************************************************************* nxos01 : ok=46 changed=12 unreachable=0 failed=1 ``` ",True,"nxos_config isn't idempotent (in some cases) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 547cea556f) last updated 2016/09/20 12:12:18 (GMT +100) lib/ansible/modules/core: (devel 12a7027c49) last updated 2016/09/20 15:11:43 (GMT +100) lib/ansible/modules/extras: (devel db7a3f48e1) last updated 2016/09/20 11:53:00 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` - name: setup nxos_config: commands: - no description - no shutdown parents: - interface Ethernet2/5 match: none provider: ""{{ cli }}"" - name: configure device with config nxos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.updates is not defined"" - name: check device with config nxos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: # Idempotent test # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.changed == false"" - ""result.updates is not defined"" ``` ``` cat templates/basic/config.j2 interface Ethernet2/5 description this is a test shutdown ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` TASK [test_nxos_config : configure device with config] ************************* task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214 `"" && echo ansible-tmp-1474483552.07-278464852480214=""` echo $HOME/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214 `"" ) && sleep 0' PUT /tmp/tmpmSyMj7 TO /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/ /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/nxos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474483552.07-278464852480214/"" > /dev/null 2>&1 && sleep 0' changed: [nxos01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""nxos01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""nxos01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Ethernet2/5\n description 
this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_nxos_config : assert] *********************************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:21 ok: [nxos01] => { ""changed"": false, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == true"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" }, ""msg"": ""all assertions passed"" } TASK [test_nxos_config : check device with config] ***************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:27 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702 `"" && echo ansible-tmp-1474483562.39-244603509576702=""` echo $HOME/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702 `"" ) && sleep 0' PUT /tmp/tmpTQ_P7m TO /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/ /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/nxos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474483562.39-244603509576702/"" > /dev/null 2>&1 && sleep 0' changed: [nxos01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""nxos01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""nxos01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Ethernet2/5\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_nxos_config : assert] *********************************************** task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_nxos_config/tests/cli/src_match_none.yaml:34 fatal: [nxos01]: FAILED! 
=> { ""assertion"": ""result.changed == false"", ""changed"": false, ""evaluated_to"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == false"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" } } to retry, use: --limit @/home/johnb/git/ansible-inc/test-network-modules/nxos.retry PLAY RECAP ********************************************************************* nxos01 : ok=46 changed=12 unreachable=0 failed=1 ``` ",1,nxos config isn t idempotent in some cases issue type bug report component name nxos config ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary steps to reproduce name setup nxos config commands no description no shutdown parents interface match none provider cli name configure device with config nxos config src basic config provider cli match none register result assert that result changed true result updates is not defined name check device with config nxos config src basic config provider cli match none register result assert that idempotent test result changed false result updates is not defined cat templates basic config interface description this is a test shutdown expected results actual results task task path home johnb git ansible inc test network modules roles test nxos config tests cli src match none yaml using module file home johnb git ansible inc ansible lib ansible modules core network nxos nxos config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home johnb ansible tmp ansible tmp nxos config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp nxos config py sleep exec bin sh c python home johnb ansible tmp ansible tmp nxos config py rm rf home johnb ansible tmp ansible tmp dev null sleep changed changed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match none parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport cli username value specified in no log parameter replace line save false src interface n description this is a test n shutdown n n ssh keyfile null timeout transport cli use ssl false username value specified in no log parameter validate certs true warnings task task path home johnb git ansible inc test network modules roles test nxos config tests cli src match none yaml ok changed false invocation module args that result changed true result updates is not defined module name assert msg all assertions passed task task path home johnb git ansible inc test network modules roles test nxos config tests cli src match none yaml using module file home johnb git ansible inc ansible lib ansible modules core network nxos nxos config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmptq to home johnb ansible tmp ansible tmp nxos config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp nxos config py sleep exec bin sh c python home johnb ansible tmp ansible tmp nxos config py rm rf home johnb ansible tmp ansible tmp dev null sleep changed changed true 
invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match none parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport cli username value specified in no log parameter replace line save false src interface n description this is a test n shutdown n n ssh keyfile null timeout transport cli use ssl false username value specified in no log parameter validate certs true warnings task task path home johnb git ansible inc test network modules roles test nxos config tests cli src match none yaml fatal failed assertion result changed false changed false evaluated to false failed true invocation module args that result changed false result updates is not defined module name assert to retry use limit home johnb git ansible inc test network modules nxos retry play recap ok changed unreachable failed ,1 978,4729438880.0,IssuesEvent,2016-10-18 18:42:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git verify_commit not working for tags,affects_2.3 bug_report waiting_on_maintainer,"OS: Mac OS X 10.8.5 Git: 2.3.5 Ansible: 2.0.0-0.7.rc2 Fails when pointing to a tag: ``` sh-3.2# git --version git version 2.3.5 sh-3.2# ansible --version ansible 2.0.0 config file = configured module search path = Default w/o overrides sh-3.2# ansible all -i ""localhost,"" -c local -m git -a ""repo=git@github.com:myuser/myproj.git dest=/opt/myproj/lib version=v1.1 accept_hostkey=true key_file=/var/root/.ssh/github_id_rsa verify_commit=yes"" localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Failed to verify GPG signature of commit/tag \""v1.1\"""" } sh-3.2# cd /opt/myproj/lib sh-3.2# git tag -v v1.1 object 2111df2d45020ea53f3f7e544ced7f22ddcdeec2 type commit tag v1.1 tagger pixelrebel 1449516643 -0800 First GPG signed tag gpg: Signature made Mon Dec 7 11:30:43 2015 PST using RSA key ID A455C3A9 gpg: checking the trustdb gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u gpg: Good signature from ""First Last (Git Signing) "" sh-3.2# git verify-tag v1.1 gpg: Signature made Mon Dec 7 11:30:43 2015 PST using RSA key ID A455C3A9 gpg: Good signature from ""First Last (Git Signing) "" ``` Perhaps this is because the git module uses `git verify-commit` exclusively? https://github.com/ansible/ansible-modules-core/blob/devel/source_control/git.py#L638 Furthermore, this may be a bug with `git`, but `git verify-commit` returns nothing regardless of whether a commit is signed or not. Also returns nothing when a commit is signed by an author whose key is not in my trustdb.",True,"git verify_commit not working for tags - OS: Mac OS X 10.8.5 Git: 2.3.5 Ansible: 2.0.0-0.7.rc2 Fails when pointing to a tag: ``` sh-3.2# git --version git version 2.3.5 sh-3.2# ansible --version ansible 2.0.0 config file = configured module search path = Default w/o overrides sh-3.2# ansible all -i ""localhost,"" -c local -m git -a ""repo=git@github.com:myuser/myproj.git dest=/opt/myproj/lib version=v1.1 accept_hostkey=true key_file=/var/root/.ssh/github_id_rsa verify_commit=yes"" localhost | FAILED! 
=> { ""changed"": false, ""failed"": true, ""msg"": ""Failed to verify GPG signature of commit/tag \""v1.1\"""" } sh-3.2# cd /opt/myproj/lib sh-3.2# git tag -v v1.1 object 2111df2d45020ea53f3f7e544ced7f22ddcdeec2 type commit tag v1.1 tagger pixelrebel 1449516643 -0800 First GPG signed tag gpg: Signature made Mon Dec 7 11:30:43 2015 PST using RSA key ID A455C3A9 gpg: checking the trustdb gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u gpg: Good signature from ""First Last (Git Signing) "" sh-3.2# git verify-tag v1.1 gpg: Signature made Mon Dec 7 11:30:43 2015 PST using RSA key ID A455C3A9 gpg: Good signature from ""First Last (Git Signing) "" ``` Perhaps this is because the git module uses `git verify-commit` exclusively? https://github.com/ansible/ansible-modules-core/blob/devel/source_control/git.py#L638 Furthermore, this may be a bug with `git`, but `git verify-commit` returns nothing regardless of whether a commit is signed or not. Also returns nothing when a commit is signed by an author whose key is not in my trustdb.",1,git verify commit not working for tags os mac os x git ansible fails when pointing to a tag sh git version git version sh ansible version ansible config file configured module search path default w o overrides sh ansible all i localhost c local m git a repo git github com myuser myproj git dest opt myproj lib version accept hostkey true key file var root ssh github id rsa verify commit yes localhost failed changed false failed true msg failed to verify gpg signature of commit tag sh cd opt myproj lib sh git tag v object type commit tag tagger pixelrebel first gpg signed tag gpg signature made mon dec pst using rsa key id gpg checking the trustdb gpg marginal s needed complete s needed pgp trust model gpg depth valid signed trust gpg good signature from first last git signing sh git verify tag gpg signature made mon dec pst using rsa key id gpg good signature from first last git signing perhaps this is because the git module uses git verify commit exclusively furthermore this may be a bug with git but git verify commit returns nothing regardless of whether a commit is signed or not also returns nothing when a commit is signed by an author whose key is not in my trustdb ,1 953,4698543154.0,IssuesEvent,2016-10-12 13:18:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Can't get job to skip steps when a variable exists,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME set_fact ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I want this project to skip all steps only under one condition: the version and haproxy.stat.exists match. For some reason the entire playbook runs despite the two matching perfectly. 
##### STEPS TO REPRODUCE ``` --- - include_vars: /var/lib/awx/projects/haproxy/roles/haproxy_role/vars/version.yml - include_vars: /var/lib/awx/projects/haproxy/roles/haproxy_role/vars/oam.yml ### Check to see if HAProxy is already installed ### - name: Check if HAProxy is already installed stat: path=/root/haproxy-1.6.9/VERSION register: haproxy ### Fetch the HAProxy version start file from remote host and copy to local server ### ### This file will contain the current version number of HAProxy ### - fetch: src: ""/root/haproxy-1.6.9/VERSION"" dest: ""/tmp/HAPROXY.txt"" flat: yes changed_when: False when: haproxy.stat.exists ### Set variable HAProxy.exists to contents of version file ### - set_fact: contents: ""{{ lookup('file', '/tmp/HAPROXY.txt') }}"" when: haproxy.stat.exists ### Placeholder variable so next steps won't fail ### ### Only used when no version of HAProxy is installed ### - set_fact: contents: """" when: haproxy.stat.exists is not defined - name: Copy HAProxy File copy: src=haproxy.tar.gz dest=/root when: ""'{{ VERSION }}' not in contents"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` I expect it to skip the Copy HAProxy File as HAProxy is installed and the version matches ``` ",True,"Can't get job to skip steps when a variable exists - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME set_fact ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I want this project to skip all steps only under one condition: the version and haproxy.stat.exists match. For some reason the entire playbook runs despite the two matching perfectly. ##### STEPS TO REPRODUCE ``` --- - include_vars: /var/lib/awx/projects/haproxy/roles/haproxy_role/vars/version.yml - include_vars: /var/lib/awx/projects/haproxy/roles/haproxy_role/vars/oam.yml ### Check to see if HAProxy is already installed ### - name: Check if HAProxy is already installed stat: path=/root/haproxy-1.6.9/VERSION register: haproxy ### Fetch the HAProxy version start file from remote host and copy to local server ### ### This file will contain the current version number of HAProxy ### - fetch: src: ""/root/haproxy-1.6.9/VERSION"" dest: ""/tmp/HAPROXY.txt"" flat: yes changed_when: False when: haproxy.stat.exists ### Set variable HAProxy.exists to contents of version file ### - set_fact: contents: ""{{ lookup('file', '/tmp/HAPROXY.txt') }}"" when: haproxy.stat.exists ### Placeholder variable so next steps won't fail ### ### Only used when no version of HAProxy is installed ### - set_fact: contents: """" when: haproxy.stat.exists is not defined - name: Copy HAProxy File copy: src=haproxy.tar.gz dest=/root when: ""'{{ VERSION }}' not in contents"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` I expect it to skip the Copy HAProxy File as HAProxy is installed and the version matches ``` ",1,can t get job to skip steps when a variable exists issue type bug report component name set fact ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment redhat summary i want this project to skip all steps only under one condition the version and haproxy stat exists match for some reason the entire playbook runs despite the two matching perfectly steps to reproduce for bugs show exactly how to reproduce the 
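In the play above, the final conditional wraps the variable in Jinja2 delimiters inside when:. Since when: is already evaluated as a raw Jinja2 expression, the documented form drops the braces; that alone may or may not explain the reported behaviour, but it is the first thing to rule out:
```
- name: Copy HAProxy File
  copy:
    src: haproxy.tar.gz
    dest: /root
  when: VERSION not in contents
```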
problem for new features show how the feature would be used include vars var lib awx projects haproxy roles haproxy role vars version yml include vars var lib awx projects haproxy roles haproxy role vars oam yml check to see if haproxy is already installed name check if haproxy is already installed stat path root haproxy version register haproxy fetch the haproxy version start file from remote host and copy to local server this file will contain the current version number of haproxy fetch src root haproxy version dest tmp haproxy txt flat yes changed when false when haproxy stat exists set variable haproxy exists to contents of version file set fact contents lookup file tmp haproxy txt when haproxy stat exists placeholder variable so next steps won t fail only used when no version of haproxy is installed set fact contents when haproxy stat exists is not defined name copy haproxy file copy src haproxy tar gz dest root when version not in contents expected results actual results i expect it to skip the copy haproxy file as haproxy is installed and the version matches ,1 828,4462761992.0,IssuesEvent,2016-08-24 11:09:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Force apt module to install a deb file if version is the same as installed version,bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/apt.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/vm/ops/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Specified roles folder, inventory file and a log path ##### OS / ENVIRONMENT N/A ##### SUMMARY A package was installed on a remote ubuntu machine with apt module in ansible. If any change is made to the default configuration file provided by the deb package, a re-run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same,thereby preventing a restore of the configuration on the server. ##### STEPS TO REPRODUCE 1) Install a package on an ubuntu machine using apt module in ansible 2) Modify a default configuration file provided by the package 3) Re-run the ansible playbook which installs the deb package ##### EXPECTED RESULTS The default configuration file should have been reverted to the state which is provided by the deb package ##### ACTUAL RESULTS The changes to the configuration file persist and are not restored on replays of playbook ##### ADDITIONAL DETAILS I dug a bit into the apt.py code and I see the following in line 492 of ansible-modules-core/packaging/os/apt.py: ``` if package_version_compare(pkg_version, installed_version) == 0: # Does not need to down-/upgrade, move on to next package continue ``` This piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package. ##### QUESTIONS 1) Although we can issue a raw command to run ""dpkg -i"" instead of using the apt module, is there any recommended way of achieving the above using the apt package which I might have missed? 2) If there is no existing functionality to achieve the above in apt module, would it make sense to give an option to allow users to prevent skipping of the deb installation in case the package name and versions match? 
Thanks",True,"Force apt module to install a deb file if version is the same as installed version - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/apt.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/vm/ops/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Specified roles folder, inventory file and a log path ##### OS / ENVIRONMENT N/A ##### SUMMARY A package was installed on a remote ubuntu machine with apt module in ansible. If any change is made to the default configuration file provided by the deb package, a re-run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same,thereby preventing a restore of the configuration on the server. ##### STEPS TO REPRODUCE 1) Install a package on an ubuntu machine using apt module in ansible 2) Modify a default configuration file provided by the package 3) Re-run the ansible playbook which installs the deb package ##### EXPECTED RESULTS The default configuration file should have been reverted to the state which is provided by the deb package ##### ACTUAL RESULTS The changes to the configuration file persist and are not restored on replays of playbook ##### ADDITIONAL DETAILS I dug a bit into the apt.py code and I see the following in line 492 of ansible-modules-core/packaging/os/apt.py: ``` if package_version_compare(pkg_version, installed_version) == 0: # Does not need to down-/upgrade, move on to next package continue ``` This piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package. ##### QUESTIONS 1) Although we can issue a raw command to run ""dpkg -i"" instead of using the apt module, is there any recommended way of achieving the above using the apt package which I might have missed? 2) If there is no existing functionality to achieve the above in apt module, would it make sense to give an option to allow users to prevent skipping of the deb installation in case the package name and versions match? 
Thanks",1,force apt module to install a deb file if version is the same as installed version issue type bug report component name ansible modules core packaging os apt py ansible version ansible config file home vm ops ansible ansible cfg configured module search path default w o overrides configuration specified roles folder inventory file and a log path os environment n a summary a package was installed on a remote ubuntu machine with apt module in ansible if any change is made to the default configuration file provided by the deb package a re run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same thereby preventing a restore of the configuration on the server steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used install a package on an ubuntu machine using apt module in ansible modify a default configuration file provided by the package re run the ansible playbook which installs the deb package expected results the default configuration file should have been reverted to the state which is provided by the deb package actual results the changes to the configuration file persist and are not restored on replays of playbook additional details i dug a bit into the apt py code and i see the following in line of ansible modules core packaging os apt py if package version compare pkg version installed version does not need to down upgrade move on to next package continue this piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package questions although we can issue a raw command to run dpkg i instead of using the apt module is there any recommended way of achieving the above using the apt package which i might have missed if there is no existing functionality to achieve the above in apt module would it make sense to give an option to allow users to prevent skipping of the deb installation in case the package name and versions match thanks,1 812,4435139810.0,IssuesEvent,2016-08-18 07:18:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_switchport not configuring trunk ports properly,bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_switchport ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides NX-OS version 7.0(3)I2(2a) ``` ##### CONFIGURATION Using defaults: ``` ansible@maas:~/ansible$ cat /etc/ansible/ansible.cfg | grep -v ^# | grep -v ^$ [defaults] [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ansible@maas:~/ansible$ ``` ##### OS / ENVIRONMENT Ubuntu 14.04, ansible installed from ppa, but the issue is not platform specific. 
##### SUMMARY Trying to get a switchport to be configured for trunk however - the ""switchport access"" configuration is not removed even with the state=absent action - the actual trunk vlan list is not applied to the interface ##### STEPS TO REPRODUCE ``` Playbook: nsible@maas:~/ansible$ cat cisco.yml --- - name: vlan provisioning hosts: cisco connection: local tasks: - nxos_switchport: interface=port-channel3123 mode=access state=absent host={{ inventory_hostname }} username=""user"" password=""password"" - nxos_switchport: interface=port-channel3123 mode=trunk native_vlan=113 trunk_vlans=10-11 host={{ inventory_hostname }} username=""user"" password=""password"" ansible@maas:~/ansible$ ``` ##### EXPECTED RESULTS ``` interface port-channel3123 description c05n2 switchport mode trunk switchport trunk native vlan 113 switchport trunk allowed vlan 10-11 spanning-tree port type edge mtu 9216 vpc 3154 ``` ##### ACTUAL RESULTS Before running the playbook: ``` interface port-channel3123 description c05n2 switchport access vlan 113 spanning-tree port type edge mtu 9216 vpc 3123 ``` After running the playbook: ``` interface port-channel3123 description c05n2 switchport mode trunk switchport access vlan 113 switchport trunk native vlan 113 spanning-tree port type edge mtu 9216 vpc 3123 ``` Output from the playbook: ``` ansible@maas:~/ansible$ ansible-playbook -i hosts cisco.yml -vvvvvvvvvvv Using /etc/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: cisco.yml ************************************************************ 1 plays in cisco.yml PLAY [vlan provisioning] ******************************************************* TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670 `"" && echo ansible-tmp-1467214096.91-180213448017670=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670 `"" ) && sleep 0' PUT /tmp/tmpMxf1kk TO /home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/setup EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/setup; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/"" > /dev/null 2>&1 && sleep 0' ok: [cisco1] TASK [nxos_switchport] ********************************************************* task path: /home/ansible/ansible/cisco.yml:8 ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416 `"" && echo ansible-tmp-1467214097.53-61932177549416=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416 `"" ) && sleep 0' PUT /tmp/tmpuViqHO TO /home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/nxos_switchport EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/nxos_switchport; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/"" > /dev/null 2>&1 && sleep 0' ok: [cisco1] => {""changed"": false, ""end_state"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, 
""existing"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""invocation"": {""module_args"": {""access_vlan"": null, ""host"": ""cisco1"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""ssh_keyfile"": null, ""state"": ""absent"", ""transport"": ""cli"", ""trunk_vlans"": null, ""use_ssl"": false, ""username"": ""ansible"", ""validate_certs"": true}, ""module_name"": ""nxos_switchport""}, ""proposed"": {""interface"": ""port-channel3123"", ""mode"": ""access""}, ""state"": ""absent"", ""updates"": []} TASK [nxos_switchport] ********************************************************* task path: /home/ansible/ansible/cisco.yml:9 ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525 `"" && echo ansible-tmp-1467214126.46-150234195272525=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525 `"" ) && sleep 0' PUT /tmp/tmpcUsQx4 TO /home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/nxos_switchport EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/nxos_switchport; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/"" > /dev/null 2>&1 && sleep 0' changed: [cisco1] => {""changed"": true, ""end_state"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""native_vlan_name"": ""test"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""existing"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""invocation"": {""module_args"": {""access_vlan"": null, ""host"": ""cisco1"", ""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""ssh_keyfile"": null, ""state"": ""present"", ""transport"": ""cli"", ""trunk_vlans"": ""10-11"", ""use_ssl"": false, ""username"": ""ansible"", ""validate_certs"": true}, ""module_name"": ""nxos_switchport""}, ""proposed"": {""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""trunk_vlans"": ""10-11""}, ""state"": ""present"", ""updates"": [""interface port-channel3123"", ""switchport mode trunk"", ""switchport trunk native vlan 113""]} PLAY RECAP ********************************************************************* cisco1 : ok=3 changed=1 unreachable=0 failed=0 ansible@maas:~/ansible$ ``` ",True,"nxos_switchport not configuring trunk ports properly - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_switchport ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides NX-OS version 7.0(3)I2(2a) ``` ##### CONFIGURATION Using defaults: ``` ansible@maas:~/ansible$ cat /etc/ansible/ansible.cfg | grep -v ^# | grep -v ^$ [defaults] [privilege_escalation] [paramiko_connection] 
[ssh_connection] [accelerate] [selinux] [colors] ansible@maas:~/ansible$ ``` ##### OS / ENVIRONMENT Ubuntu 14.04, ansible installed from ppa, but the issue is not platform specific. ##### SUMMARY Trying to get a switchport to be configured for trunk however - the ""switchport access"" configuration is not removed even with the state=absent action - the actual trunk vlan list is not applied to the interface ##### STEPS TO REPRODUCE ``` Playbook: nsible@maas:~/ansible$ cat cisco.yml --- - name: vlan provisioning hosts: cisco connection: local tasks: - nxos_switchport: interface=port-channel3123 mode=access state=absent host={{ inventory_hostname }} username=""user"" password=""password"" - nxos_switchport: interface=port-channel3123 mode=trunk native_vlan=113 trunk_vlans=10-11 host={{ inventory_hostname }} username=""user"" password=""password"" ansible@maas:~/ansible$ ``` ##### EXPECTED RESULTS ``` interface port-channel3123 description c05n2 switchport mode trunk switchport trunk native vlan 113 switchport trunk allowed vlan 10-11 spanning-tree port type edge mtu 9216 vpc 3154 ``` ##### ACTUAL RESULTS Before running the playbook: ``` interface port-channel3123 description c05n2 switchport access vlan 113 spanning-tree port type edge mtu 9216 vpc 3123 ``` After running the playbook: ``` interface port-channel3123 description c05n2 switchport mode trunk switchport access vlan 113 switchport trunk native vlan 113 spanning-tree port type edge mtu 9216 vpc 3123 ``` Output from the playbook: ``` ansible@maas:~/ansible$ ansible-playbook -i hosts cisco.yml -vvvvvvvvvvv Using /etc/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: cisco.yml ************************************************************ 1 plays in cisco.yml PLAY [vlan provisioning] ******************************************************* TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670 `"" && echo ansible-tmp-1467214096.91-180213448017670=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670 `"" ) && sleep 0' PUT /tmp/tmpMxf1kk TO /home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/setup EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/setup; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214096.91-180213448017670/"" > /dev/null 2>&1 && sleep 0' ok: [cisco1] TASK [nxos_switchport] ********************************************************* task path: /home/ansible/ansible/cisco.yml:8 ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416 `"" && echo ansible-tmp-1467214097.53-61932177549416=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416 `"" ) && sleep 0' PUT /tmp/tmpuViqHO TO /home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/nxos_switchport EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/nxos_switchport; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214097.53-61932177549416/"" > /dev/null 2>&1 && sleep 0' ok: [cisco1] => {""changed"": false, ""end_state"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", 
""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""existing"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""invocation"": {""module_args"": {""access_vlan"": null, ""host"": ""cisco1"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""ssh_keyfile"": null, ""state"": ""absent"", ""transport"": ""cli"", ""trunk_vlans"": null, ""use_ssl"": false, ""username"": ""ansible"", ""validate_certs"": true}, ""module_name"": ""nxos_switchport""}, ""proposed"": {""interface"": ""port-channel3123"", ""mode"": ""access""}, ""state"": ""absent"", ""updates"": []} TASK [nxos_switchport] ********************************************************* task path: /home/ansible/ansible/cisco.yml:9 ESTABLISH LOCAL CONNECTION FOR USER: ansible EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525 `"" && echo ansible-tmp-1467214126.46-150234195272525=""` echo $HOME/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525 `"" ) && sleep 0' PUT /tmp/tmpcUsQx4 TO /home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/nxos_switchport EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/nxos_switchport; rm -rf ""/home/ansible/.ansible/tmp/ansible-tmp-1467214126.46-150234195272525/"" > /dev/null 2>&1 && sleep 0' changed: [cisco1] => {""changed"": true, ""end_state"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""native_vlan_name"": ""test"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""existing"": {""access_vlan"": ""113"", ""access_vlan_name"": ""test"", ""interface"": ""port-channel3123"", ""mode"": ""access"", ""native_vlan"": ""1"", ""native_vlan_name"": ""default"", ""switchport"": ""Enabled"", ""trunk_vlans"": ""1-4094""}, ""invocation"": {""module_args"": {""access_vlan"": null, ""host"": ""cisco1"", ""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""ssh_keyfile"": null, ""state"": ""present"", ""transport"": ""cli"", ""trunk_vlans"": ""10-11"", ""use_ssl"": false, ""username"": ""ansible"", ""validate_certs"": true}, ""module_name"": ""nxos_switchport""}, ""proposed"": {""interface"": ""port-channel3123"", ""mode"": ""trunk"", ""native_vlan"": ""113"", ""trunk_vlans"": ""10-11""}, ""state"": ""present"", ""updates"": [""interface port-channel3123"", ""switchport mode trunk"", ""switchport trunk native vlan 113""]} PLAY RECAP ********************************************************************* cisco1 : ok=3 changed=1 unreachable=0 failed=0 ansible@maas:~/ansible$ ``` ",1,nxos switchport not configuring trunk ports properly issue type bug report component name nxos switchport ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides nx os version configuration using defaults ansible maas ansible cat etc ansible 
ansible cfg grep v grep v ansible maas ansible os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu ansible installed from ppa but the issue is not platform specific summary trying to get a switchport to be configured for trunk however the switchport access configuration is not removed even with the state absent action the actual trunk vlan list is not applied to the interface steps to reproduce playbook nsible maas ansible cat cisco yml name vlan provisioning hosts cisco connection local tasks nxos switchport interface port mode access state absent host inventory hostname username user password password nxos switchport interface port mode trunk native vlan trunk vlans host inventory hostname username user password password ansible maas ansible expected results interface port description switchport mode trunk switchport trunk native vlan switchport trunk allowed vlan spanning tree port type edge mtu vpc actual results before running the playbook interface port description switchport access vlan spanning tree port type edge mtu vpc after running the playbook interface port description switchport mode trunk switchport access vlan switchport trunk native vlan spanning tree port type edge mtu vpc output from the playbook ansible maas ansible ansible playbook i hosts cisco yml vvvvvvvvvvv using etc ansible ansible cfg as config file loaded callback default of type stdout playbook cisco yml plays in cisco yml play task establish local connection for user ansible exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible ansible tmp ansible tmp setup exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible ansible tmp ansible tmp setup rm rf home ansible ansible tmp ansible tmp dev null sleep ok task task path home ansible ansible cisco yml establish local connection for user ansible exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpuviqho to home ansible ansible tmp ansible tmp nxos switchport exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible ansible tmp ansible tmp nxos switchport rm rf home ansible ansible tmp ansible tmp dev null sleep ok changed false end state access vlan access vlan name test interface port mode access native vlan native vlan name default switchport enabled trunk vlans existing access vlan access vlan name test interface port mode access native vlan native vlan name default switchport enabled trunk vlans invocation module args access vlan null host interface port mode access native vlan null password value specified in no log parameter port null provider null ssh keyfile null state absent transport cli trunk vlans null use ssl false username ansible validate certs true module name nxos switchport proposed interface port mode access state absent updates task task path home ansible ansible cisco yml establish local connection for user ansible exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible ansible tmp ansible tmp nxos switchport exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible ansible tmp ansible tmp nxos switchport rm rf home ansible ansible tmp ansible tmp dev null sleep changed changed true 
end state access vlan access vlan name test interface port mode trunk native vlan native vlan name test switchport enabled trunk vlans existing access vlan access vlan name test interface port mode access native vlan native vlan name default switchport enabled trunk vlans invocation module args access vlan null host interface port mode trunk native vlan password value specified in no log parameter port null provider null ssh keyfile null state present transport cli trunk vlans use ssl false username ansible validate certs true module name nxos switchport proposed interface port mode trunk native vlan trunk vlans state present updates play recap ok changed unreachable failed ansible maas ansible ,1 1841,6577374380.0,IssuesEvent,2017-09-12 00:28:02,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mysql_user Provide access to mysql 5.7 installs,affects_2.0 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides` ``` ##### CONFIGURATION NONE/Using Tower ##### OS / ENVIRONMENT N/A ##### SUMMARY Cannot log in as root@localhost into mysql with null password anymore. The problem is the same as the problem mentioned here: https://forge.puppet.com/puppetlabs/mysql#mysql_datadir Ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here: Blog post from 2015 when feature was introduced: http://mysqlserverteam.com/initialize-your-mysql-5-7-instances-with-ease/ There's a workaround to launch mysqld temporaryly with root password disabled but it doesn't work in deamonized mode: https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_initialize-insecure The WORKAROUND is to scrape: /var/log/mysqld.log after mysqld starts as a serverand look for a the following: [Note] A temporary password is generated for root@localhost: O,k5.marHfFu Then parse it and use the password on the current my_sql modules. ##### STEPS TO REPRODUCE https://github.com/einarc/autoscaling-blog/tree/feature Please run in a RHEL7 instance with playbook config. ##### EXPECTED RESULTS The mysql instance can be accessed and configured: ##### ACTUAL RESULTS ``` failed: [172.16.5.197] => (item=ip-172-16-5-197) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""ip-172-16-5-197"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""ip-172-16-5-197"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. 
Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=127.0.0.1) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""127.0.0.1"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""127.0.0.1"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=::1) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""::1"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""::1"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=localhost) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""localhost"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""localhost"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""}` ``` ",True,"mysql_user Provide access to mysql 5.7 installs - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides` ``` ##### CONFIGURATION NONE/Using Tower ##### OS / ENVIRONMENT N/A ##### SUMMARY Cannot log in as root@localhost into mysql with null password anymore. 
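The log-scraping workaround this report describes can be sketched as a pair of tasks. This is only an illustration: the sed pattern, the /var/log/mysqld.log location, and the mysql_root_password variable are assumptions about a default MySQL 5.7 install, not something taken from the issue.
```
- name: Extract the generated temporary root password from the server log
  shell: "sed -n 's/.*temporary password is generated for root@localhost: //p' /var/log/mysqld.log | tail -1"
  register: mysql_temp_password
  changed_when: false

- name: Change the root password, logging in with the temporary one
  mysql_user:
    name: root
    host: localhost
    password: "{{ mysql_root_password }}"               # hypothetical variable holding the desired password
    login_user: root
    login_password: "{{ mysql_temp_password.stdout }}"  # note: MySQL 5.7 may still insist the expired password is reset first
```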
The problem is the same as the problem mentioned here: https://forge.puppet.com/puppetlabs/mysql#mysql_datadir Ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here: Blog post from 2015 when feature was introduced: http://mysqlserverteam.com/initialize-your-mysql-5-7-instances-with-ease/ There's a workaround to launch mysqld temporaryly with root password disabled but it doesn't work in deamonized mode: https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_initialize-insecure The WORKAROUND is to scrape: /var/log/mysqld.log after mysqld starts as a serverand look for a the following: [Note] A temporary password is generated for root@localhost: O,k5.marHfFu Then parse it and use the password on the current my_sql modules. ##### STEPS TO REPRODUCE https://github.com/einarc/autoscaling-blog/tree/feature Please run in a RHEL7 instance with playbook config. ##### EXPECTED RESULTS The mysql instance can be accessed and configured: ##### ACTUAL RESULTS ``` failed: [172.16.5.197] => (item=ip-172-16-5-197) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""ip-172-16-5-197"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""ip-172-16-5-197"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=127.0.0.1) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""127.0.0.1"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""127.0.0.1"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. 
Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=::1) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""::1"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""::1"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""} failed: [172.16.5.197] => (item=localhost) => {""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": true, ""config_file"": ""~/.my.cnf"", ""encrypted"": false, ""host"": ""localhost"", ""login_host"": ""localhost"", ""login_password"": null, ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": null, ""name"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""root""}, ""module_name"": ""mysql_user""}, ""item"": ""localhost"", ""msg"": ""unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \""Access denied for user 'root'@'localhost' (using password: NO)\"")""}` ``` ",1,mysql user provide access to mysql installs issue type feature idea component name mysql user module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none using tower os environment n a summary cannot log in as root localhost into mysql with null password anymore the problem is the same as the problem mentioned here ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here blog post from when feature was introduced there s a workaround to launch mysqld temporaryly with root password disabled but it doesn t work in deamonized mode the workaround is to scrape var log mysqld log after mysqld starts as a serverand look for a the following a temporary password is generated for root localhost o marhffu then parse it and use the password on the current my sql modules steps to reproduce please run in a instance with playbook config expected results the mysql instance can be accessed and configured actual results failed item ip failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host ip login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item ip msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed 
item failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed item failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed item localhost failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host localhost login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item localhost msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no ,1 1069,4889335256.0,IssuesEvent,2016-11-18 09:52:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add proxy server parameter to Gem module,affects_2.0 feature_idea waiting_on_maintainer,"##### Issue Type: Feature Idea ##### Ansible Version: 2.0.0.2 ##### Ansible Configuration: What have you changed about your Ansible installation? What configuration settings have you changed/added/removed? Compare your /etc/ansible/ansible.cfg against a clean version from Github and let us know what's different. ##### Environment: CentOS 6.7 (Final) ##### Summary: The current gem module does not have an attribute for specifying a proxy server. Usage of this module behind a proxy server fails. ##### Steps To Reproduce: Running the following step in my playbook from a server behind a corporate proxy server (I have no choice over that) - name: install sensu disk check plugin gem: name=sensu-plugins-disk-checks version=0.0.1 state=present executable=/opt/sensu/embedded/bin/gem ##### Expected Results: Gem is installed. ##### Actual Results: Output from the playbook ``` TASK [sensu : install sensu disk check plugin] ********************************* fatal: [mymachine.local]: FAILED! 
=> {""changed"": false, ""cmd"": ""/opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document sensu-plugins-disk-checks"", ""failed"": true, ""msg"": ""ERROR: Could not find a valid gem 'sensu-plugins-disk-checks' (= 0.0.1), here is why:\n Unable to download data from https://rubygems.org/ - Errno::ECONNREFUSED: Connection refused - connect(2) for \""api.rubygems.org\"" port 443 (https://api.rubygems.org/specs.4.8.gz)"", ""rc"": 2, ""stderr"": ""ERROR: Could not find a valid gem 'sensu-plugins-disk-checks' (= 0.0.1), here is why:\n Unable to download data from https://rubygems.org/ - Errno::ECONNREFUSED: Connection refused - connect(2) for \""api.rubygems.org\"" port 443 (https://api.rubygems.org/specs.4.8.gz)\n"", ""stdout"": """", ""stdout_lines"": []} ``` Specifying a proxy server appears to be a standard feature of the gem command, I expect it's not just me sat behind a corporate proxy server that needs this functionality. http://stackoverflow.com/questions/6172198/installing-ruby-gems I've verifued by running the command as extracted from the output above. The following command fails with the same error as above (as expected) ``` /opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document sensu-plugins-disk-checks ``` Running with the proxy server option succeeds. ``` /opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document --http-proxy http://myproxyservername:8080 sensu-plugins-disk-checks ``` ",True,"Add proxy server parameter to Gem module - ##### Issue Type: Feature Idea ##### Ansible Version: 2.0.0.2 ##### Ansible Configuration: What have you changed about your Ansible installation? What configuration settings have you changed/added/removed? Compare your /etc/ansible/ansible.cfg against a clean version from Github and let us know what's different. ##### Environment: CentOS 6.7 (Final) ##### Summary: The current gem module does not have an attribute for specifying a proxy server. Usage of this module behind a proxy server fails. ##### Steps To Reproduce: Running the following step in my playbook from a server behind a corporate proxy server (I have no choice over that) - name: install sensu disk check plugin gem: name=sensu-plugins-disk-checks version=0.0.1 state=present executable=/opt/sensu/embedded/bin/gem ##### Expected Results: Gem is installed. ##### Actual Results: Output from the playbook ``` TASK [sensu : install sensu disk check plugin] ********************************* fatal: [mymachine.local]: FAILED! => {""changed"": false, ""cmd"": ""/opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document sensu-plugins-disk-checks"", ""failed"": true, ""msg"": ""ERROR: Could not find a valid gem 'sensu-plugins-disk-checks' (= 0.0.1), here is why:\n Unable to download data from https://rubygems.org/ - Errno::ECONNREFUSED: Connection refused - connect(2) for \""api.rubygems.org\"" port 443 (https://api.rubygems.org/specs.4.8.gz)"", ""rc"": 2, ""stderr"": ""ERROR: Could not find a valid gem 'sensu-plugins-disk-checks' (= 0.0.1), here is why:\n Unable to download data from https://rubygems.org/ - Errno::ECONNREFUSED: Connection refused - connect(2) for \""api.rubygems.org\"" port 443 (https://api.rubygems.org/specs.4.8.gz)\n"", ""stdout"": """", ""stdout_lines"": []} ``` Specifying a proxy server appears to be a standard feature of the gem command, I expect it's not just me sat behind a corporate proxy server that needs this functionality. 
http://stackoverflow.com/questions/6172198/installing-ruby-gems I've verifued by running the command as extracted from the output above. The following command fails with the same error as above (as expected) ``` /opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document sensu-plugins-disk-checks ``` Running with the proxy server option succeeds. ``` /opt/sensu/embedded/bin/gem install --version 0.0.1 --user-install --no-document --http-proxy http://myproxyservername:8080 sensu-plugins-disk-checks ``` ",1,add proxy server parameter to gem module issue type feature idea ansible version ansible configuration what have you changed about your ansible installation what configuration settings have you changed added removed compare your etc ansible ansible cfg against a clean version from github and let us know what s different environment centos final summary the current gem module does not have an attribute for specifying a proxy server usage of this module behind a proxy server fails steps to reproduce running the following step in my playbook from a server behind a corporate proxy server i have no choice over that name install sensu disk check plugin gem name sensu plugins disk checks version state present executable opt sensu embedded bin gem expected results gem is installed actual results output from the playbook task fatal failed changed false cmd opt sensu embedded bin gem install version user install no document sensu plugins disk checks failed true msg error could not find a valid gem sensu plugins disk checks here is why n unable to download data from errno econnrefused connection refused connect for api rubygems org port rc stderr error could not find a valid gem sensu plugins disk checks here is why n unable to download data from errno econnrefused connection refused connect for api rubygems org port stdout stdout lines specifying a proxy server appears to be a standard feature of the gem command i expect it s not just me sat behind a corporate proxy server that needs this functionality i ve verifued by running the command as extracted from the output above the following command fails with the same error as above as expected opt sensu embedded bin gem install version user install no document sensu plugins disk checks running with the proxy server option succeeds opt sensu embedded bin gem install version user install no document http proxy sensu plugins disk checks ,1 1298,5541677688.0,IssuesEvent,2017-03-22 13:28:49,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,2.1.1: ec2_vpc does not update/change routing table tags,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION [defaults] inventory = ~/inventory/production host_key_checking = False log_path = ~/ansible.log [ssh_connection] pipelining = True ##### OS / ENVIRONMENT N/A ##### SUMMARY Before with Ansible 1.9.6 and the patch mentioned here: http://grokbase.com/t/gg/ansible-project/149pfx39dg/tagging-ec2-vpc-route-tables-and-gateways#20150304gdcdptnuteldhgubbx3m2wfuie , after changing the routing table tag in my playbook or on the AWS console, the routing table tags got changed to the value of my playbook on rerun. In Ansible 2.1.1 the initial tags are set, but not updated on a rerun after changing it in my playbook or on the AWS console. 
##### STEPS TO REPRODUCE run a playbook with, for instance: ``` route_tables: - subnets: - ""10.0.1.0/24"" - ""10.0.2.0/24"" routes: - dest: 0.0.0.0/0 gw: igw resource_tags: { ""Name"" : ""dmz"" } ``` Now change the tag value 'dmz' to 'public' in the playbook or on the AWS console and re-run the playbook. See that the tag value does not get changed. ##### EXPECTED RESULTS I expect the tag value to correspond with the playbook after running it. ##### ACTUAL RESULTS Tag value did not get updated. ",True,"2.1.1: ec2_vpc does not update/change routing table tags - ##### ISSUE TYPE - Bug Report ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION [defaults] inventory = ~/inventory/production host_key_checking = False log_path = ~/ansible.log [ssh_connection] pipelining = True ##### OS / ENVIRONMENT N/A ##### SUMMARY Before with Ansible 1.9.6 and the patch mentioned here: http://grokbase.com/t/gg/ansible-project/149pfx39dg/tagging-ec2-vpc-route-tables-and-gateways#20150304gdcdptnuteldhgubbx3m2wfuie , after changing the routing table tag in my playbook or on the AWS console, the routing table tags got changed to the value of my playbook on rerun. In Ansible 2.1.1 the initial tags are set, but not updated on a rerun after changing it in my playbook or on the AWS console. ##### STEPS TO REPRODUCE run a playbook with, for instance: ``` route_tables: - subnets: - ""10.0.1.0/24"" - ""10.0.2.0/24"" routes: - dest: 0.0.0.0/0 gw: igw resource_tags: { ""Name"" : ""dmz"" } ``` Now change the tag value 'dmz' to 'public' in the playbook or on the AWS console and re-run the playbook. See that the tag value does not get changed. ##### EXPECTED RESULTS I expect the tag value to correspond with the playbook after running it. ##### ACTUAL RESULTS Tag value did not get updated. 
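As a stopgap while ec2_vpc does not update the tag, the tag can be forced with the ec2_tag module. This is a sketch only; the region and the rtb-... route table ID are placeholders that would have to be looked up for the VPC in question.
```
- name: Force the Name tag on the route table
  ec2_tag:
    region: us-east-1                   # placeholder region
    resource: rtb-0123456789abcdef0     # placeholder route table ID
    state: present
    tags:
      Name: public
```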
",1, vpc does not update change routing table tags issue type bug report ansible version ansible configuration inventory inventory production host key checking false log path ansible log pipelining true os environment n a summary before with ansible and the patch mentioned here after changing the routing table tag in my playbook or on the aws console the routing table tags got changed to the value of my playbook on rerun in ansible the initial tags are set but not updated on a rerun after changing it in my playbook or on the aws console steps to reproduce run playbook with for instance route tables subnets routes dest gw igw resource tags name dmz now change the tags value dmz to public in the playbook or on the aws console and re run playbook see that the tag value does not get changed expected results i expect the tag value to correspond with the playbook after running it actual results tag value did not get updated ,1 1745,6574929578.0,IssuesEvent,2017-09-11 14:31:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Cannot pull all facts with nxos_facts,affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_facts ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit c064dce) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides or 2.2.0.0-0.2.rc2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.04 4.4.0 - **Target nodes**: NX-OSv 7.3(0)D1(1) (last release available in Cisco VIRL) ##### SUMMARY Running nxos_facts triggers a fatal error (connection timeout), whereas I can manually login into the target node with SSH or run nxos-feature on the same targets. 
##### STEPS TO REPRODUCE ``` - include_vars: ""../defaults/{{ os_family }}/http.yml"" - include_vars: ""../defaults/{{ os_family }}/connections.yml"" - name: Fetching facts from the remote node nxos_facts: gather_subset: all provider: ""{{ connections.nxapi }}"" register: return ``` ##### EXPECTED RESULTS Successful nxos_facts ##### ACTUAL RESULTS ``` TASK [nxos_pull_facts : Fetching facts from the remote node] ******************* task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/nxos_pull_facts/tasks/main.yml:74 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py <172.21.100.12> ESTABLISH LOCAL CONNECTION FOR USER: root <172.21.100.12> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `"" && echo ansible-tmp-1476884246.75-158252726280111=""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `"" ) && sleep 0' <172.21.100.11> ESTABLISH LOCAL CONNECTION FOR USER: root <172.21.100.11> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `"" && echo ansible-tmp-1476884246.75-116017618040720=""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `"" ) && sleep 0' <172.21.100.12> PUT /tmp/tmpbsH8md TO /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py <172.21.100.11> PUT /tmp/tmpiPM3JI TO /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py <172.21.100.12> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/ /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py && sleep 0' <172.21.100.11> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/ /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py && sleep 0' <172.21.100.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/"" > /dev/null 2>&1 && sleep 0' <172.21.100.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/"" > /dev/null 2>&1 && sleep 0' fatal: [NX_OSv_Spine_11]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": false, ""gather_subset"": [ ""all"" ], ""host"": ""172.21.100.11"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 8080, ""provider"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""host"": ""172.21.100.11"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": ""8080"", ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": false }, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": false }, ""module_name"": ""nxos_facts"" }, ""msg"": ""Connection failure: timed out"", ""status"": -1, ""url"": ""http://172.21.100.11:8080/ins"" } ``` ",True,"Cannot pull all facts with nxos_facts - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_facts ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit c064dce) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides or 2.2.0.0-0.2.rc2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.04 4.4.0 - **Target nodes**: NX-OSv 7.3(0)D1(1) (last release available in Cisco VIRL) ##### SUMMARY Running nxos_facts triggers a fatal error (connection timeout), whereas I can manually login into the target node with SSH or run nxos-feature on the same targets. 
##### STEPS TO REPRODUCE ``` - include_vars: ""../defaults/{{ os_family }}/http.yml"" - include_vars: ""../defaults/{{ os_family }}/connections.yml"" - name: Fetching facts from the remote node nxos_facts: gather_subset: all provider: ""{{ connections.nxapi }}"" register: return ``` ##### EXPECTED RESULTS Successful nxos_facts ##### ACTUAL RESULTS ``` TASK [nxos_pull_facts : Fetching facts from the remote node] ******************* task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/nxos_pull_facts/tasks/main.yml:74 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py <172.21.100.12> ESTABLISH LOCAL CONNECTION FOR USER: root <172.21.100.12> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `"" && echo ansible-tmp-1476884246.75-158252726280111=""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `"" ) && sleep 0' <172.21.100.11> ESTABLISH LOCAL CONNECTION FOR USER: root <172.21.100.11> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `"" && echo ansible-tmp-1476884246.75-116017618040720=""` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `"" ) && sleep 0' <172.21.100.12> PUT /tmp/tmpbsH8md TO /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py <172.21.100.11> PUT /tmp/tmpiPM3JI TO /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py <172.21.100.12> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/ /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py && sleep 0' <172.21.100.11> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/ /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py && sleep 0' <172.21.100.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/"" > /dev/null 2>&1 && sleep 0' <172.21.100.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/"" > /dev/null 2>&1 && sleep 0' fatal: [NX_OSv_Spine_11]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": false, ""gather_subset"": [ ""all"" ], ""host"": ""172.21.100.11"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 8080, ""provider"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""host"": ""172.21.100.11"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": ""8080"", ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": false }, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""nxapi"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": false }, ""module_name"": ""nxos_facts"" }, ""msg"": ""Connection failure: timed out"", ""status"": -1, ""url"": ""http://172.21.100.11:8080/ins"" } ``` ",1,cannot pull all facts with nxos facts issue type bug report component name nxos facts ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides or config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes nx osv last release available in cisco virl summary running nxos facts triggers a fatal error connection timeout whereas i can manually login into the target node with ssh or run nxos feature on the same targets steps to reproduce include vars defaults os family http yml include vars defaults os family connections yml name fetching facts from the remote node nxos facts gather subset all provider connections nxapi register return expected results successful nxos facts actual results task task path home actionmystique program files ubuntu ansible roles roles nxos pull facts tasks main yml using module file usr lib dist packages ansible modules core network nxos nxos facts py using module file usr lib dist packages ansible modules core network nxos nxos facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp nxos facts py put tmp to root ansible tmp ansible tmp nxos facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp nxos facts py sleep exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp nxos facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp nxos facts py rm rf root ansible tmp ansible tmp dev null sleep exec bin sh c usr bin python root ansible tmp ansible tmp nxos facts py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args auth pass value specified in no log parameter authorize false gather subset all host password value specified in no log parameter port provider auth pass value specified in no log parameter host password value specified in no log parameter port transport nxapi use ssl false username admin validate certs false ssh keyfile null timeout transport nxapi use ssl false username admin validate certs 
false module name nxos facts msg connection failure timed out status url ,1 1129,4998415489.0,IssuesEvent,2016-12-09 19:47:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Using junos_config to overwrite config does not work,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config module + module_utils/junos.py ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION Stock - no changes ##### OS / ENVIRONMENT Running on Ubuntu 4.4.0.51, but should be platform independent. ##### SUMMARY The junos_config module documentation does not allow for overwriting configuration (similar to the **load override** Junos CLI command). The module documentation states that using the **replace: yes** option will work, but is considered deprecated; and to use the **update: replace** option instead. However, neither of these keywords actually work - **replace: yes** lets the playbook run, but does not actually perform a **load override** but a **load merge**; and **update: replace** fails with an unknown parameter for the module. Digging in the module and module_utils code, it seems that the expected parameter to use is actually **overwrite: yes**, but this also fails with an unknown keyword. The module also does not seem to call the load_config function from the module_utils/junos.py with the correct arguments, causing load_config to default the overwrite variable to False on init. In addition to this, it seems that the logic in the module_utils/junos.py resource file is wrong for the overwrite clause - it sets **merge = True** and **overwrite = False**; I'm guessing this should be the other way. ##### STEPS TO REPRODUCE Run a playbook with the junos_config command to any Junos device. Include a complete config as the source and try various combinations of the parameters described above (**replace: yes**, **update: replace** and **overwrite: yes**). When running with **replace: yes**; I suggest attempting this against a switch, and trying to change the VLAN of an access port (the playbook will fail stating you can only have a single VLAN on an access port, since it's merging rather than replacing) or against a router changing an interface IP address (instead of replacing, the config will add a second IP to the interface in question). The other two cases above will fail with a parameter error. ``` - name: Push config to devices hosts: it-office-switches gather_facts: no tasks: - name: Installing config junos_config: host: ""{{ junos_ip }}"" port: 22 username: ""{{ junos_user }}"" password: ""{{ junos_password }}"" update: replace comment: ""Installing baseline config via Ansible"" src: ""{{ output_dir }}/config.conf"" src_format: text ``` ##### EXPECTED RESULTS My goal was to push a config to a Junos device and have it apply it as if I ran a **load override** command. ##### ACTUAL RESULTS Configuration is either merged with the existing config (similar to a **load merge** command) when running **replace: yes** or playbook fails completely when using **update: replace** or **overwrite: yes**. 
",True,"Using junos_config to overwrite config does not work - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config module + module_utils/junos.py ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION Stock - no changes ##### OS / ENVIRONMENT Running on Ubuntu 4.4.0.51, but should be platform independent. ##### SUMMARY The junos_config module documentation does not allow for overwriting configuration (similar to the **load override** Junos CLI command). The module documentation states that using the **replace: yes** option will work, but is considered deprecated; and to use the **update: replace** option instead. However, neither of these keywords actually work - **replace: yes** lets the playbook run, but does not actually perform a **load override** but a **load merge**; and **update: replace** fails with an unknown parameter for the module. Digging in the module and module_utils code, it seems that the expected parameter to use is actually **overwrite: yes**, but this also fails with an unknown keyword. The module also does not seem to call the load_config function from the module_utils/junos.py with the correct arguments, causing load_config to default the overwrite variable to False on init. In addition to this, it seems that the logic in the module_utils/junos.py resource file is wrong for the overwrite clause - it sets **merge = True** and **overwrite = False**; I'm guessing this should be the other way. ##### STEPS TO REPRODUCE Run a playbook with the junos_config command to any Junos device. Include a complete config as the source and try various combinations of the parameters described above (**replace: yes**, **update: replace** and **overwrite: yes**). When running with **replace: yes**; I suggest attempting this against a switch, and trying to change the VLAN of an access port (the playbook will fail stating you can only have a single VLAN on an access port, since it's merging rather than replacing) or against a router changing an interface IP address (instead of replacing, the config will add a second IP to the interface in question). The other two cases above will fail with a parameter error. ``` - name: Push config to devices hosts: it-office-switches gather_facts: no tasks: - name: Installing config junos_config: host: ""{{ junos_ip }}"" port: 22 username: ""{{ junos_user }}"" password: ""{{ junos_password }}"" update: replace comment: ""Installing baseline config via Ansible"" src: ""{{ output_dir }}/config.conf"" src_format: text ``` ##### EXPECTED RESULTS My goal was to push a config to a Junos device and have it apply it as if I ran a **load override** command. ##### ACTUAL RESULTS Configuration is either merged with the existing config (similar to a **load merge** command) when running **replace: yes** or playbook fails completely when using **update: replace** or **overwrite: yes**. 
",1,using junos config to overwrite config does not work issue type bug report component name junos config module module utils junos py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration stock no changes os environment running on ubuntu but should be platform independent summary the junos config module documentation does not allow for overwriting configuration similar to the load override junos cli command the module documentation states that using the replace yes option will work but is considered deprecated and to use the update replace option instead however neither of these keywords actually work replace yes lets the playbook run but does not actually perform a load override but a load merge and update replace fails with an unknown parameter for the module digging in the module and module utils code it seems that the expected parameter to use is actually overwrite yes but this also fails with an unknown keyword the module also does not seem to call the load config function from the module utils junos py with the correct arguments causing load config to default the overwrite variable to false on init in addition to this it seems that the logic in the module utils junos py resource file is wrong for the overwrite clause it sets merge true and overwrite false i m guessing this should be the other way steps to reproduce run a playbook with the junos config command to any junos device include a complete config as the source and try various combinations of the parameters described above replace yes update replace and overwrite yes when running with replace yes i suggest attempting this against a switch and trying to change the vlan of an access port the playbook will fail stating you can only have a single vlan on an access port since it s merging rather than replacing or against a router changing an interface ip address instead of replacing the config will add a second ip to the interface in question the other two cases above will fail with a parameter error name push config to devices hosts it office switches gather facts no tasks name installing config junos config host junos ip port username junos user password junos password update replace comment installing baseline config via ansible src output dir config conf src format text expected results my goal was to push a config to a junos device and have it apply it as if i ran a load override command actual results configuration is either merged with the existing config similar to a load merge command when running replace yes or playbook fails completely when using update replace or overwrite yes ,1 1894,6577538586.0,IssuesEvent,2017-09-12 01:36:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,route53: always thinks alias record has changed,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: route53 ##### Ansible Version: ``` ansible 2.0.1.0 config file = /Users/spencer/src/khaki/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: none relevant ##### Environment: OS X ##### Summary: route53 command=create for alias record without overwrite=yes always fails when record exists. It correctly detects matching record if you are not using alias records. 
##### Steps To Reproduce: set up a task to create a simple alias record in Route 53 and run it now try running it again, it fails ##### Example task ``` - name: Create A record alias for proxy node route53: aws_access_key: ""{{aws_access_key}}"" aws_secret_key: ""{{aws_secret_key}}"" zone: ""larkave.com"" command: create type: A alias: True alias_hosted_zone_id: ""{{ kube_proxy_zone_id }}"" value: ""{{ kube_proxy_dns_name }}"" record: ""proxy.{{ kube_dns_domain }}"" ttl: 300 ``` #### Expected Results: ""ok: [localhost]"" ##### Actual Results: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""alias"": true, ""alias_hosted_zone_id"": ""Z33MTJ483KN6FU"", ""aws_access_key"": ""AKIAJJU7TXWYTYP4GQAA"", ""aws_secret_key"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""command"": ""create"", ""ec2_url"": null, ""failover"": null, ""health_check"": null, ""hosted_zone_id"": null, ""identifier"": null, ""overwrite"": null, ""private_zone"": false, ""profile"": null, ""record"": ""proxy.larkave.com"", ""region"": null, ""retry_interval"": 500, ""security_token"": null, ""ttl"": 300, ""type"": ""A"", ""validate_certs"": true, ""value"": ""tst-kube-proxy-1257808881.us-west-2.elb.amazonaws.com"", ""vpc_id"": null, ""weight"": null, ""zone"": ""larkave.com""}, ""module_name"": ""route53""}, ""msg"": ""Record already exists with different value. Set 'overwrite' to replace it""} ``` ",True,"route53: always thinks alias record has changed - ##### Issue Type: - Bug Report ##### Plugin Name: route53 ##### Ansible Version: ``` ansible 2.0.1.0 config file = /Users/spencer/src/khaki/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: none relevant ##### Environment: OS X ##### Summary: route53 command=create for alias record without overwrite=yes always fails when record exists. It correctly detects matching record if you are not using alias records. ##### Steps To Reproduce: set up a task to create a simple alias record in Route 53 and run it now try running it again, it fails ##### Example task ``` - name: Create A record alias for proxy node route53: aws_access_key: ""{{aws_access_key}}"" aws_secret_key: ""{{aws_secret_key}}"" zone: ""larkave.com"" command: create type: A alias: True alias_hosted_zone_id: ""{{ kube_proxy_zone_id }}"" value: ""{{ kube_proxy_dns_name }}"" record: ""proxy.{{ kube_dns_domain }}"" ttl: 300 ``` #### Expected Results: ""ok: [localhost]"" ##### Actual Results: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""alias"": true, ""alias_hosted_zone_id"": ""Z33MTJ483KN6FU"", ""aws_access_key"": ""AKIAJJU7TXWYTYP4GQAA"", ""aws_secret_key"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""command"": ""create"", ""ec2_url"": null, ""failover"": null, ""health_check"": null, ""hosted_zone_id"": null, ""identifier"": null, ""overwrite"": null, ""private_zone"": false, ""profile"": null, ""record"": ""proxy.larkave.com"", ""region"": null, ""retry_interval"": 500, ""security_token"": null, ""ttl"": 300, ""type"": ""A"", ""validate_certs"": true, ""value"": ""tst-kube-proxy-1257808881.us-west-2.elb.amazonaws.com"", ""vpc_id"": null, ""weight"": null, ""zone"": ""larkave.com""}, ""module_name"": ""route53""}, ""msg"": ""Record already exists with different value. 
Set 'overwrite' to replace it""} ``` ",1, always thinks alias record has changed issue type bug report plugin name ansible version ansible config file users spencer src khaki ansible ansible cfg configured module search path default w o overrides ansible configuration none relevant environment os x summary command create for alias record without overwrite yes always fails when record exists it correctly detects matching record if you are not using alias records steps to reproduce set up a task to create a simple alias record in route and run it now try running it again it fails example task name create a record alias for proxy node aws access key aws access key aws secret key aws secret key zone larkave com command create type a alias true alias hosted zone id kube proxy zone id value kube proxy dns name record proxy kube dns domain ttl expected results ok actual results fatal failed changed false failed true invocation module args alias true alias hosted zone id aws access key aws secret key value specified in no log parameter command create url null failover null health check null hosted zone id null identifier null overwrite null private zone false profile null record proxy larkave com region null retry interval security token null ttl type a validate certs true value tst kube proxy us west elb amazonaws com vpc id null weight null zone larkave com module name msg record already exists with different value set overwrite to replace it ,1 1794,6575901688.0,IssuesEvent,2017-09-11 17:46:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,sysctl: parameter value check for already defined params,affects_2.3 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME sysctl ##### ANSIBLE VERSION ``` ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Multiple roles/plays can modify the sysctl and set different values for the same parameter. My suggestion is to add a check functionality, that - if set to yes - checks if the new value is greater ( or less or equal) than the existing value. If so, it sets it, otherwise skip it. The problem at the moment is that I can't check the value and set it in one task. It doesn't look clean at the moment and is just a workaround. example follows. I know that this can't work for all parameter, because i.e. kernel.sem contains a string of parameters. But for the mayority it should be a benefit. ##### STEPS TO REPRODUCE ``` - name: check parameter fs.file-max shell: ""sysctl -a | grep fs.file-max | grep -o [0-9]*"" args: executable: /bin/bash register: fs_file_max - name: set parameter in sysctl fs.file-max sysctl: name=""fs.file-max"" value=6815744 state=present when: fs_file_max.stdout <= 6815744 # Would be nice to have something like that: - name: set parameter in sysctl fs.file-max sysctl: name=""fs.file-max"" value=6815744 state=present check=yes check-params: ge # ge - greater equal, le - less equal, e - equal ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"sysctl: parameter value check for already defined params - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME sysctl ##### ANSIBLE VERSION ``` ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Multiple roles/plays can modify the sysctl and set different values for the same parameter. My suggestion is to add a check functionality, that - if set to yes - checks if the new value is greater ( or less or equal) than the existing value. If so, it sets it, otherwise skip it. 
The problem at the moment is that I can't check the value and set it in one task. It doesn't look clean at the moment and is just a workaround. example follows. I know that this can't work for all parameter, because i.e. kernel.sem contains a string of parameters. But for the mayority it should be a benefit. ##### STEPS TO REPRODUCE ``` - name: check parameter fs.file-max shell: ""sysctl -a | grep fs.file-max | grep -o [0-9]*"" args: executable: /bin/bash register: fs_file_max - name: set parameter in sysctl fs.file-max sysctl: name=""fs.file-max"" value=6815744 state=present when: fs_file_max.stdout <= 6815744 # Would be nice to have something like that: - name: set parameter in sysctl fs.file-max sysctl: name=""fs.file-max"" value=6815744 state=present check=yes check-params: ge # ge - greater equal, le - less equal, e - equal ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,sysctl parameter value check for already defined params issue type feature idea component name sysctl ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary multiple roles plays can modify the sysctl and set different values for the same parameter my suggestion is to add a check functionality that if set to yes checks if the new value is greater or less or equal than the existing value if so it sets it otherwise skip it the problem at the moment is that i can t check the value and set it in one task it doesn t look clean at the moment and is just a workaround example follows i know that this can t work for all parameter because i e kernel sem contains a string of parameters but for the mayority it should be a benefit steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name check parameter fs file max shell sysctl a grep fs file max grep o args executable bin bash register fs file max name set parameter in sysctl fs file max sysctl name fs file max value state present when fs file max stdout would be nice to have something like that name set parameter in sysctl fs file max sysctl name fs file max value state present check yes check params ge ge greater equal le less equal e equal expected results actual results ,1 1111,4988627149.0,IssuesEvent,2016-12-08 09:04:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Service module no longer works with async ,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/vagrant/source/GHE/ansible-playground/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Ubuntu 14.04 (but probably many others) ##### SUMMARY On Ansible 2.1 , async could be specified on tasks using the `service` module. This was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time. At Ansible 2.2, this fails with `async mode is not supported with the service module` ##### STEPS TO REPRODUCE Create a dummy service that is guaranteed to take a certain amount of time to start. 
For this reproduce, create file `/etc/init/testservice.conf` , as root, with the following contents: ``` pre-start script #!/bin/bash i=0 while [ ""$i"" -lt 10 ] do echo ""Attempt $i"" sleep 2 i=$((i+1)) done exit 0 end script script echo ""Started"" end script ``` This service is guaranteed to take 20 seconds to start. Run the following playbook against localhost: ``` --- - hosts: all become: yes become_user: root become_method: sudo tasks: - name: upstart restart service: ""name=testservice state=restarted sleep=1"" async: 10 poll: 5 ignore_errors: yes register: restart_status - name: fail deploy if upstart restart failed fail: msg=""The upstart restart step failed."" when: restart_status | failed ``` ##### EXPECTED RESULTS At Ansible 2.1.2 restart timed out : `async task did not complete within the requested time` ``` PLAYBOOK: testservices.yml ***************************************************** 1 plays in testservices.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `"" && echo ansible-tmp-1478188242.21-246358132149554=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpUugRdk TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-gpflpphtwddftzrfkoriswnuymaymkrl; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/"" > /dev/null 2>&1'""'""' && sleep 0' ok: [127.0.0.1] TASK [upstart restart] ********************************************************* task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `"" && echo ansible-tmp-1478188244.41-125126621993988=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpUByg0S TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service <127.0.0.1> PUT /tmp/tmpw0Z12E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-yxjgpmoqcwfowpsyvbxridnwklvypoha; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper 164650715721 10 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service'""'""' && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r 
/home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ > /dev/null 2>&1 && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `"" && echo ansible-tmp-1478188250.68-192180370745608=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmprydg9E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-gepyxugizyhafzmhsvczcvexlulqhfgw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/"" > /dev/null 2>&1'""'""' && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `"" && echo ansible-tmp-1478188255.79-182646281999246=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiswcWa TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-ynrcmnmeylrhanlnoauwffgvwlzuwbgq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/"" > /dev/null 2>&1'""'""' && sleep 0' fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""async task did not complete within the requested time""} ...ignoring TASK [fail deploy if upstart restart failed] *********************************** task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13 fatal: [127.0.0.1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""msg"": ""The upstart restart step failed.""}, ""module_name"": ""fail""}, ""msg"": ""The upstart restart step failed.""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1 ``` ##### ACTUAL RESULTS At Ansible 2.2 : `async mode is not supported with the service module` ``` PLAYBOOK: testservices.yml ***************************************************** 1 plays in testservices.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `"" && echo ansible-tmp-1478188095.18-113784857458696=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmphWWy6b TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-bulczgwqvdnbjvnyovwlypoyymqngdvk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/"" > /dev/null 2>&1'""'""' && sleep 0' ok: [127.0.0.1] TASK [upstart restart] ********************************************************* task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7 fatal: [127.0.0.1]: FAILED! => { ""failed"": true, ""msg"": ""async mode is not supported with the service module"" } ...ignoring TASK [fail deploy if upstart restart failed] *********************************** task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13 fatal: [127.0.0.1]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""msg"": ""The upstart restart step failed."" }, ""module_name"": ""fail"" }, ""msg"": ""The upstart restart step failed."" } to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1 ``` ",True,"Service module no longer works with async - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME service module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/vagrant/source/GHE/ansible-playground/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Ubuntu 14.04 (but probably many others) ##### SUMMARY On Ansible 2.1 , async could be specified on tasks using the `service` module. This was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time. 
At Ansible 2.2, this fails with `async mode is not supported with the service module` ##### STEPS TO REPRODUCE Create a dummy service that is guaranteed to take a certain amount of time to start. For this reproduce, create file `/etc/init/testservice.conf` , as root, with the following contents: ``` pre-start script #!/bin/bash i=0 while [ ""$i"" -lt 10 ] do echo ""Attempt $i"" sleep 2 i=$((i+1)) done exit 0 end script script echo ""Started"" end script ``` This service is guaranteed to take 20 seconds to start. Run the following playbook against localhost: ``` --- - hosts: all become: yes become_user: root become_method: sudo tasks: - name: upstart restart service: ""name=testservice state=restarted sleep=1"" async: 10 poll: 5 ignore_errors: yes register: restart_status - name: fail deploy if upstart restart failed fail: msg=""The upstart restart step failed."" when: restart_status | failed ``` ##### EXPECTED RESULTS At Ansible 2.1.2 restart timed out : `async task did not complete within the requested time` ``` PLAYBOOK: testservices.yml ***************************************************** 1 plays in testservices.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `"" && echo ansible-tmp-1478188242.21-246358132149554=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpUugRdk TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-gpflpphtwddftzrfkoriswnuymaymkrl; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/"" > /dev/null 2>&1'""'""' && sleep 0' ok: [127.0.0.1] TASK [upstart restart] ********************************************************* task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `"" && echo ansible-tmp-1478188244.41-125126621993988=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpUByg0S TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service <127.0.0.1> PUT /tmp/tmpw0Z12E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-yxjgpmoqcwfowpsyvbxridnwklvypoha; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 
/home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper 164650715721 10 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service'""'""' && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ > /dev/null 2>&1 && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `"" && echo ansible-tmp-1478188250.68-192180370745608=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmprydg9E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-gepyxugizyhafzmhsvczcvexlulqhfgw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/"" > /dev/null 2>&1'""'""' && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `"" && echo ansible-tmp-1478188255.79-182646281999246=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiswcWa TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-ynrcmnmeylrhanlnoauwffgvwlzuwbgq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/"" > /dev/null 2>&1'""'""' && sleep 0' fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""async task did not complete within the requested time""} ...ignoring TASK [fail deploy if upstart restart failed] *********************************** task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13 fatal: [127.0.0.1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""msg"": ""The upstart restart step failed.""}, ""module_name"": ""fail""}, ""msg"": ""The upstart restart step failed.""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1 ``` ##### ACTUAL RESULTS At Ansible 2.2 : `async mode is not supported with the service module` ``` PLAYBOOK: testservices.yml ***************************************************** 1 plays in testservices.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `"" && echo ansible-tmp-1478188095.18-113784857458696=""` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmphWWy6b TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-bulczgwqvdnbjvnyovwlypoyymqngdvk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/"" > /dev/null 2>&1'""'""' && sleep 0' ok: [127.0.0.1] TASK [upstart restart] ********************************************************* task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7 fatal: [127.0.0.1]: FAILED! => { ""failed"": true, ""msg"": ""async mode is not supported with the service module"" } ...ignoring TASK [fail deploy if upstart restart failed] *********************************** task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13 fatal: [127.0.0.1]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""msg"": ""The upstart restart step failed."" }, ""module_name"": ""fail"" }, ""msg"": ""The upstart restart step failed."" } to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1 ``` ",1,service module no longer works with async issue type bug report component name service module ansible version ansible config file home vagrant source ghe ansible playground ansible cfg configured module search path default w o overrides configuration n a os environment ubuntu but probably many others summary on ansible async could be specified on tasks using the service module this was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time at ansible this fails with async mode is not supported with the service module steps to reproduce create a dummy service that is guaranteed to take a certain amount of time to start for this reproduce create file etc init testservice conf as root with the following contents pre start script bin bash i while do echo attempt i sleep i i done exit end script script echo started end script this service is guaranteed to take seconds to start run the following playbook against localhost hosts all become yes become user root become method sudo tasks name upstart restart service name testservice state restarted sleep async poll ignore errors yes register restart status name fail deploy if upstart restart failed fail msg the upstart restart step failed when restart status failed expected results at ansible restart timed out async task did not complete within the requested time playbook testservices yml plays in testservices yml play task establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpuugrdk to home vagrant ansible tmp ansible tmp setup exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup sleep exec bin sh c sudo h s n u root bin sh c echo become success gpflpphtwddftzrfkoriswnuymaymkrl lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp setup rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant source ghe ansible playground testservices yml establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp service put tmp to home vagrant ansible tmp ansible tmp async wrapper exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp service home vagrant ansible tmp ansible tmp async wrapper sleep exec bin sh c sudo h s n u root bin sh c echo become success yxjgpmoqcwfowpsyvbxridnwklvypoha lang en us utf lc all en us utf lc messages en us utf home vagrant ansible tmp ansible tmp async wrapper home vagrant ansible tmp ansible tmp service sleep exec bin sh c rm f r home vagrant ansible tmp ansible tmp dev null sleep exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp async status exec bin sh c chmod u x home vagrant ansible tmp 
ansible tmp home vagrant ansible tmp ansible tmp async status sleep exec bin sh c sudo h s n u root bin sh c echo become success gepyxugizyhafzmhsvczcvexlulqhfgw lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp async status rm rf home vagrant ansible tmp ansible tmp dev null sleep exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiswcwa to home vagrant ansible tmp ansible tmp async status exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp async status sleep exec bin sh c sudo h s n u root bin sh c echo become success ynrcmnmeylrhanlnoauwffgvwlzuwbgq lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp async status rm rf home vagrant ansible tmp ansible tmp dev null sleep fatal failed changed false failed true msg async task did not complete within the requested time ignoring task task path home vagrant source ghe ansible playground testservices yml fatal failed changed false failed true invocation module args msg the upstart restart step failed module name fail msg the upstart restart step failed no more hosts left to retry use limit home vagrant source ghe ansible playground testservices retry play recap ok changed unreachable failed actual results at ansible async mode is not supported with the service module playbook testservices yml plays in testservices yml play task using module file usr local lib dist packages ansible modules core system setup py establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp setup py exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep exec bin sh c sudo h s n u root bin sh c echo become success bulczgwqvdnbjvnyovwlypoyymqngdvk usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant source ghe ansible playground testservices yml fatal failed failed true msg async mode is not supported with the service module ignoring task task path home vagrant source ghe ansible playground testservices yml fatal failed changed false failed true invocation module args msg the upstart restart step failed module name fail msg the upstart restart step failed to retry use limit home vagrant source ghe ansible playground testservices retry play recap ok changed unreachable failed ,1 1913,6577578665.0,IssuesEvent,2017-09-12 01:53:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Create VM with vsphere_guest from template and change IP address,affects_2.3 cloud feature_idea vmware waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY I need a way to create virtual machine (VM) from template and change ip address in the creation process. ",True,"Create VM with vsphere_guest from template and change IP address - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY I need a way to create virtual machine (VM) from template and change ip address in the creation process. 
",1,create vm with vsphere guest from template and change ip address issue type feature idea component name vsphere guest ansible version n a summary i need a way to create virtual machine vm from template and change ip address in the creation process ,1 1135,4998638906.0,IssuesEvent,2016-12-09 20:31:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module: profile option ignored,affects_1.9 aws bug_report cloud waiting_on_maintainer,"**Issue Type**: Bug Report **Ansible Version**: 1.9 **Ansible Configuration**: default **Environment**: OSX 10.9.5 **Summary**: The ""profile"" option in the ec2 module is ignored **Steps To Reproduce**: Create a playbook like so and run it. The account in which `subnet-0000000` exists must not be in your default aws profile (defined in `~/.aws/{credentials,config}`: ``` - name: Make an instance hosts: all connection: local gather_facts: False tasks: - name: Manage instances ec2: profile: myprofile region: us-east-1 wait: yes group_id: sg-000000 key_name: mykey vpc_subnet_id: subnet-0000000 instance_type: m3.medium instance_tags: Name: my-test image: ami-0000000 volumes: - device_name: /dev/sdb ephemeral: ephemeral0 ``` **Expected Results**: An instance is created as specified **Actual Results**: The following error occurs: ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 3054, in main() File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 1246, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 789, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1185, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request InvalidSubnetID.NotFoundThe subnet ID 'subnet-000000' does not existxxxxxxxxxxxxxxxxxx ``` Changing the profile in which that subnet exists to the default profile allows the playbook to run as expected. ",True,"ec2 module: profile option ignored - **Issue Type**: Bug Report **Ansible Version**: 1.9 **Ansible Configuration**: default **Environment**: OSX 10.9.5 **Summary**: The ""profile"" option in the ec2 module is ignored **Steps To Reproduce**: Create a playbook like so and run it. 
The account in which `subnet-0000000` exists must not be in your default aws profile (defined in `~/.aws/{credentials,config}`: ``` - name: Make an instance hosts: all connection: local gather_facts: False tasks: - name: Manage instances ec2: profile: myprofile region: us-east-1 wait: yes group_id: sg-000000 key_name: mykey vpc_subnet_id: subnet-0000000 instance_type: m3.medium instance_tags: Name: my-test image: ami-0000000 volumes: - device_name: /dev/sdb ephemeral: ephemeral0 ``` **Expected Results**: An instance is created as specified **Actual Results**: The following error occurs: ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 3054, in main() File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 1246, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/me/.ansible/tmp/ansible-tmp-1436472344.45-200995340429191/ec2"", line 789, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1185, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request InvalidSubnetID.NotFoundThe subnet ID 'subnet-000000' does not existxxxxxxxxxxxxxxxxxx ``` Changing the profile in which that subnet exists to the default profile allows the playbook to run as expected. ",1, module profile option ignored issue type bug report ansible version ansible configuration default environment osx summary the profile option in the module is ignored steps to reproduce create a playbook like so and run it the account in which subnet exists must not be in your default aws profile defined in aws credentials config name make an instance hosts all connection local gather facts false tasks name manage instances profile myprofile region us east wait yes group id sg key name mykey vpc subnet id subnet instance type medium instance tags name my test image ami volumes device name dev sdb ephemeral expected results an instance is created as specified actual results the following error occurs failed failed true parsed false traceback most recent call last file users me ansible tmp ansible tmp line in main file users me ansible tmp ansible tmp line in main instance dict array new instance ids changed create instances module vpc file users me ansible tmp ansible tmp line in create instances vpc id vpc get all subnets subnet ids vpc id file library python site packages boto vpc init py line in get all subnets return self get list describesubnets params file library python site packages boto connection py line in get list raise self responseerror response status response reason body boto exception bad request invalidsubnetid notfound the subnet id subnet does not exist xxxxxxxxxxxxxxxxxx changing the profile in which that subnet exists to the default profile allows the playbook to run as expected ,1 1683,6574154290.0,IssuesEvent,2017-09-11 11:43:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Update URI documentation to remove deprecated example and add new one,affects_2.1 docs_report waiting_on_maintainer," ##### ISSUE TYPE - 
Documentation Report ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Trusty ##### SUMMARY In the documentation for the uri module (http://docs.ansible.com/ansible/uri_module.html) the examples only show examples for HEADER_, which is deprecated as of 2.1 in favour of the ""headers"" argument. I forked the repo and set upI'm not really sure how to create a PR. ##### STEPS TO REPRODUCE ``` - HEADER_Content-Type: ""application/x-www-form-urlencoded"" + headers: ""{'Content-Type': 'application/json'}"" ``` ##### EXPECTED RESULTS Example showing how to use the headers argument ##### ACTUAL RESULTS It didn't show it ``` ``` ",True,"Update URI documentation to remove deprecated example and add new one - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Trusty ##### SUMMARY In the documentation for the uri module (http://docs.ansible.com/ansible/uri_module.html) the examples only show examples for HEADER_, which is deprecated as of 2.1 in favour of the ""headers"" argument. I forked the repo and set upI'm not really sure how to create a PR. ##### STEPS TO REPRODUCE ``` - HEADER_Content-Type: ""application/x-www-form-urlencoded"" + headers: ""{'Content-Type': 'application/json'}"" ``` ##### EXPECTED RESULTS Example showing how to use the headers argument ##### ACTUAL RESULTS It didn't show it ``` ``` ",1,update uri documentation to remove deprecated example and add new one issue type documentation report component name uri ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific trusty summary in the documentation for the uri module the examples only show examples for header which is deprecated as of in favour of the headers argument i forked the repo and set upi m not really sure how to create a pr steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used header content type application x www form urlencoded headers content type application json expected results example showing how to use the headers argument actual results it didn t show it ,1 1750,6574944280.0,IssuesEvent,2017-09-11 14:34:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bug Report: Fetch fails if ansible_ssh_host is localhost,affects_2.1 bug_report waiting_on_maintainer,"ISSUE TYPE Bug Report COMPONENT NAME fetch ANSIBLE VERSION 2.1.0.0 CONFIGURATION These are the most relevant config items though I don't know that there is a correlation: ssh_args = -o ControlMaster=auto -o ControlPersist=60s OS / ENVIRONMENT N/A SUMMARY When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes. I previously opened an issue for this (https://github.com/ansible/ansible-modules-core/issues/4814) but believe it to be closed in error. The symptoms are the same as another issue in which the user didn't have enough space on the target filesystem, but that is not the root cause in this case. 
STEPS TO REPRODUCE Use the play below with ansible_ssh_host set to localhost hosts: '{{ hosts }}' gather_facts: False tasks: fetch: src: /tmp/remote_file dest: /tmp/local_file flat: true fail_on_missing: true EXPECTED RESULTS I would expect the behavior to be the same for localhost as it is for every other host. ACTUAL RESULTS Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not). ok: [localhost] => {""changed"": false, ""file"": ""/tmp/remote_file"", ""invocation"": {""module_args"": {""dest"": ""/tmp/local_file"", ""fail_on_missing"": true, ""flat"": true, ""src"": ""/tmp/remote_file""}, ""module_name"": ""fetch""}, ""msg"": ""unable to calculate the checksum of the remote file""} ",True,"Bug Report: Fetch fails if ansible_ssh_host is localhost - ISSUE TYPE Bug Report COMPONENT NAME fetch ANSIBLE VERSION 2.1.0.0 CONFIGURATION These are the most relevant config items though I don't know that there is a correlation: ssh_args = -o ControlMaster=auto -o ControlPersist=60s OS / ENVIRONMENT N/A SUMMARY When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes. I previously opened an issue for this (https://github.com/ansible/ansible-modules-core/issues/4814) but believe it to be closed in error. The symptoms are the same as another issue in which the user didn't have enough space on the target filesystem, but that is not the root cause in this case. STEPS TO REPRODUCE Use the play below with ansible_ssh_host set to localhost hosts: '{{ hosts }}' gather_facts: False tasks: fetch: src: /tmp/remote_file dest: /tmp/local_file flat: true fail_on_missing: true EXPECTED RESULTS I would expect the behavior to be the same for localhost as it is for every other host. ACTUAL RESULTS Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not). 
ok: [localhost] => {""changed"": false, ""file"": ""/tmp/remote_file"", ""invocation"": {""module_args"": {""dest"": ""/tmp/local_file"", ""fail_on_missing"": true, ""flat"": true, ""src"": ""/tmp/remote_file""}, ""module_name"": ""fetch""}, ""msg"": ""unable to calculate the checksum of the remote file""} ",1,bug report fetch fails if ansible ssh host is localhost issue type bug report component name fetch ansible version configuration these are the most relevant config items though i don t know that there is a correlation ssh args o controlmaster auto o controlpersist os environment n a summary when ansible ssh host is set to localhost fetch says it succeeds but it never gets the file it s important that localhost works the same as every other host value for testing purposes i previously opened an issue for this but believe it to be closed in error the symptoms are the same as another issue in which the user didn t have enough space on the target filesystem but that is not the root cause in this case steps to reproduce use the play below with ansible ssh host set to localhost hosts hosts gather facts false tasks fetch src tmp remote file dest tmp local file flat true fail on missing true expected results i would expect the behavior to be the same for localhost as it is for every other host actual results fetch says it succeeds but verbose output actually shows this error doesn t matter if the file exists or not ok changed false file tmp remote file invocation module args dest tmp local file fail on missing true flat true src tmp remote file module name fetch msg unable to calculate the checksum of the remote file ,1 1781,6575830765.0,IssuesEvent,2017-09-11 17:29:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,stat.exists regression,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME stat ##### ANSIBLE VERSION ``` 2.1.2.0 ``` ##### OS / ENVIRONMENT Travis CI ##### SUMMARY My playbook that tries to check whether file exists or not started to fails with ansible 2.1.2.0 release. It worked on 2.1.1.0 version. ##### STEPS TO REPRODUCE ``` - name: Getting info about WAR file stat: path: 'target/mystamps.war' get_checksum: no get_md5: no register: war_file become: no delegate_to: 127.0.0.1 - name: Ensuring that WAR file exists assert: that: war_file.stat.exists become: no delegate_to: 127.0.0.1 ``` ##### EXPECTED RESULTS ``` TASK [Getting info about WAR file] ********************************************* ok: [my-stamps.ru -> 127.0.0.1] TASK [Ensuring that WAR file exists] ******************************************* ok: [my-stamps.ru -> 127.0.0.1] ``` ##### ACTUAL RESULTS ``` TASK [Getting info about WAR file] ********************************************* ok: [my-stamps.ru -> 127.0.0.1] TASK [Ensuring that WAR file exists] ******************************************* fatal: [my-stamps.ru -> 127.0.0.1]: FAILED! => {""assertion"": ""war_file.stat.exists"", ""changed"": false, ""evaluated_to"": false, ""failed"": true} ``` ",True,"stat.exists regression - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME stat ##### ANSIBLE VERSION ``` 2.1.2.0 ``` ##### OS / ENVIRONMENT Travis CI ##### SUMMARY My playbook that tries to check whether file exists or not started to fails with ansible 2.1.2.0 release. It worked on 2.1.1.0 version. 
##### STEPS TO REPRODUCE ``` - name: Getting info about WAR file stat: path: 'target/mystamps.war' get_checksum: no get_md5: no register: war_file become: no delegate_to: 127.0.0.1 - name: Ensuring that WAR file exists assert: that: war_file.stat.exists become: no delegate_to: 127.0.0.1 ``` ##### EXPECTED RESULTS ``` TASK [Getting info about WAR file] ********************************************* ok: [my-stamps.ru -> 127.0.0.1] TASK [Ensuring that WAR file exists] ******************************************* ok: [my-stamps.ru -> 127.0.0.1] ``` ##### ACTUAL RESULTS ``` TASK [Getting info about WAR file] ********************************************* ok: [my-stamps.ru -> 127.0.0.1] TASK [Ensuring that WAR file exists] ******************************************* fatal: [my-stamps.ru -> 127.0.0.1]: FAILED! => {""assertion"": ""war_file.stat.exists"", ""changed"": false, ""evaluated_to"": false, ""failed"": true} ``` ",1,stat exists regression issue type bug report component name stat ansible version os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific travis ci summary my playbook that tries to check whether file exists or not started to fails with ansible release it worked on version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name getting info about war file stat path target mystamps war get checksum no get no register war file become no delegate to name ensuring that war file exists assert that war file stat exists become no delegate to expected results task ok task ok actual results task ok task fatal failed assertion war file stat exists changed false evaluated to false failed true ,1 1915,6577706212.0,IssuesEvent,2017-09-12 02:44:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,AWS Route53 limited in options for setting up routes (weighted latency not available),affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: - cloud/amazon/route53 ##### Ansible Version: ansible 2.0.0.2 config file = /home/db/.ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: None ##### Environment: Ubuntu 12.04 running on a VirtualBox VM. ##### Summary: I have manually built some infrastructure in AWS and I want to automate the build of that with Ansible. This manually built infrastructure uses weighted latency based routing. However, I cannot read back the information fully from the facts nor is there any option for setting this up. Speaking with @defionscode at Ansiblefest 2016 last Thursday, I was asked to raise this issue for a fix. ##### Steps To Reproduce: Not a bug - feature request. ##### Expected Results: Simply trying to get the results back of the current routes to reflect the configuration available through the console. ##### Actual Results: Not applicable. ",True,"AWS Route53 limited in options for setting up routes (weighted latency not available) - ##### Issue Type: - Feature Idea ##### Plugin Name: - cloud/amazon/route53 ##### Ansible Version: ansible 2.0.0.2 config file = /home/db/.ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: None ##### Environment: Ubuntu 12.04 running on a VirtualBox VM. ##### Summary: I have manually built some infrastructure in AWS and I want to automate the build of that with Ansible. 
This manually built infrastructure uses weighted latency based routing. However, I cannot read back the information fully from the facts nor is there any option for setting this up. Speaking with @defionscode at Ansiblefest 2016 last Thursday, I was asked to raise this issue for a fix. ##### Steps To Reproduce: Not a bug - feature request. ##### Expected Results: Simply trying to get the results back of the current routes to reflect the configuration available through the console. ##### Actual Results: Not applicable. ",1,aws limited in options for setting up routes weighted latency not available issue type feature idea plugin name cloud amazon ansible version ansible config file home db ansible cfg configured module search path default w o overrides ansible configuration none environment ubuntu running on a virtualbox vm summary i have manually built some infrastructure in aws and i want to automate the build of that with ansible this manually built infrastructure uses weighted latency based routing however i cannot read back the information fully from the facts nor is there any option for setting this up speaking with defionscode at ansiblefest last thursday i was asked to raise this issue for a fix steps to reproduce not a bug feature request expected results simply trying to get the results back of the current routes to reflect the configuration available through the console actual results not applicable ,1 1789,6575881306.0,IssuesEvent,2017-09-11 17:41:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 badly handles non ec2 instance related limits,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2.py ##### ANSIBLE VERSION ``` Using: ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides But, I have checked devel branch in this repo and the issue seems still not handled ``` ##### CONFIGURATION ##### OS / ENVIRONMENT GNU/Linux (Ubuntu 14.04.x and 16.04.x) 64-bit. ##### SUMMARY `ec2.py` module does not report well failures coming from **non instance** related **limits** ##### STEPS TO REPRODUCE Just try creating `N` instances with `M` EBS volumes each, making sure to exceed volume related limit. ``` - name: create nodes local_action: module: ec2 params, args count: ``` ##### EXPECTED RESULTS At the least I need the invocation to fail with meaningful error: 1. fail the module invocation, and print error message explaining which limit was actually exceeded. In a perfect world, I would also expect playbook to gracefully fail with ""rollback"": 1. destroy already created nodes 2. print failure message as explained before 3. fail the task ##### ACTUAL RESULTS ``` 20:56:39 TASK [my-aws-bootstrap : create cluster nodes] ******************************* 20:56:39 task path: /var/lib/jenkins/jobs/lab-start/workspace/mypipeline/playbooks/roles/aws-bootstrap/tasks/main.yml:32 20:56:39 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 20:56:39 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python && sleep 0' 21:16:42 fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 16, ""count_tag"": null, ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": null, ""group"": [""group1""], ""group_id"": null, ""id"": null, ""image"": ""ami-xxxxxxxx"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Environment"": ""pipeline-lab"", ""Name"": ""node"", ""lab_id"": ""lab-100"", ""user"": ""user1""}, ""instance_type"": ""i2.8xlarge"", ""kernel"": null, ""key_name"": ""userkey"", ""monitoring"": true, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""us-east-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": [{""delete_on_termination"": true, ""device_name"": ""/dev/xvda"", ""volume_size"": 1000}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdl"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdm"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdn"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""device_name"": ""/dev/xvdd"", ""ephemeral"": ""ephemeral0""}, {""device_name"": ""/dev/xvde"", ""ephemeral"": ""ephemeral1""}, {""device_name"": ""/dev/xvdf"", ""ephemeral"": ""ephemeral2""}, {""device_name"": ""/dev/xvdg"", ""ephemeral"": ""ephemeral3""}, {""device_name"": ""/dev/xvdh"", ""ephemeral"": ""ephemeral4""}, {""device_name"": ""/dev/xvdi"", ""ephemeral"": ""ephemeral5""}, {""device_name"": ""/dev/xvdj"", ""ephemeral"": ""ephemeral6""}, {""device_name"": ""/dev/xvdk"", ""ephemeral"": ""ephemeral7""}], ""vpc_subnet_id"": ""subnet-xxxxxx"", ""wait"": true, ""wait_timeout"": ""1200"", ""zone"": ""us-east-1a""}, ""module_name"": ""ec2""}, ""msg"": ""wait for instances running timeout on Mon Sep 26 21:16:42 2016""} ``` ``` ``` ",True,"ec2 badly handles non ec2 instance related limits - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2.py ##### ANSIBLE VERSION ``` Using: ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides But, I have checked devel branch in this repo and the issue seems still not handled ``` ##### CONFIGURATION ##### OS / ENVIRONMENT GNU/Linux (Ubuntu 14.04.x and 16.04.x) 64-bit. ##### SUMMARY `ec2.py` module does not report well failures coming from **non instance** related **limits** ##### STEPS TO REPRODUCE Just try creating `N` instances with `M` EBS volumes each, making sure to exceed volume related limit. ``` - name: create nodes local_action: module: ec2 params, args count: ``` ##### EXPECTED RESULTS At the least I need the invocation to fail with meaningful error: 1. fail the module invocation, and print error message explaining which limit was actually exceeded. In a perfect world, I would also expect playbook to gracefully fail with ""rollback"": 1. destroy already created nodes 2. print failure message as explained before 3. 
fail the task ##### ACTUAL RESULTS ``` 20:56:39 TASK [my-aws-bootstrap : create cluster nodes] ******************************* 20:56:39 task path: /var/lib/jenkins/jobs/lab-start/workspace/mypipeline/playbooks/roles/aws-bootstrap/tasks/main.yml:32 20:56:39 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 20:56:39 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python && sleep 0' 21:16:42 fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 16, ""count_tag"": null, ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": null, ""group"": [""group1""], ""group_id"": null, ""id"": null, ""image"": ""ami-xxxxxxxx"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Environment"": ""pipeline-lab"", ""Name"": ""node"", ""lab_id"": ""lab-100"", ""user"": ""user1""}, ""instance_type"": ""i2.8xlarge"", ""kernel"": null, ""key_name"": ""userkey"", ""monitoring"": true, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""us-east-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": [{""delete_on_termination"": true, ""device_name"": ""/dev/xvda"", ""volume_size"": 1000}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdl"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdm"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdn"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""device_name"": ""/dev/xvdd"", ""ephemeral"": ""ephemeral0""}, {""device_name"": ""/dev/xvde"", ""ephemeral"": ""ephemeral1""}, {""device_name"": ""/dev/xvdf"", ""ephemeral"": ""ephemeral2""}, {""device_name"": ""/dev/xvdg"", ""ephemeral"": ""ephemeral3""}, {""device_name"": ""/dev/xvdh"", ""ephemeral"": ""ephemeral4""}, {""device_name"": ""/dev/xvdi"", ""ephemeral"": ""ephemeral5""}, {""device_name"": ""/dev/xvdj"", ""ephemeral"": ""ephemeral6""}, {""device_name"": ""/dev/xvdk"", ""ephemeral"": ""ephemeral7""}], ""vpc_subnet_id"": ""subnet-xxxxxx"", ""wait"": true, ""wait_timeout"": ""1200"", ""zone"": ""us-east-1a""}, ""module_name"": ""ec2""}, ""msg"": ""wait for instances running timeout on Mon Sep 26 21:16:42 2016""} ``` ``` ``` ",1, badly handles non instance related limits issue type bug report component name cloud amazon py ansible version using ansible config file etc ansible ansible cfg configured module search path default w o overrides but i have checked devel branch in this repo and the issue seems still not handled configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific gnu linux ubuntu x and x bit summary py module does not report well failures coming from non instance related limits steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used just try creating n 
instances with m ebs volumes each making sure to exceed volume related limit name create nodes local action module params args count expected results at the least i need the invocation to fail with meaningful error fail the module invocation and print error message explaining which limit was actually exceeded in a perfect world i would also expect playbook to gracefully fail with rollback destroy already created nodes print failure message as explained before fail the task actual results task task path var lib jenkins jobs lab start workspace mypipeline playbooks roles aws bootstrap tasks main yml establish local connection for user jenkins exec bin sh c lang en us utf lc all en us utf lc messages en us utf python sleep fatal failed changed false failed true invocation module args assign public ip false aws access key null aws secret key null count count tag null ebs optimized false url null exact count null group group id null id null image ami xxxxxxxx instance ids null instance profile name null instance tags environment pipeline lab name node lab id lab user instance type kernel null key name userkey monitoring true network interfaces null placement group null private ip null profile null ramdisk null region us east security token null source dest check true spot launch group null spot price null spot type one time spot wait timeout state present tenancy default termination protection false user data null validate certs true volumes vpc subnet id subnet xxxxxx wait true wait timeout zone us east module name msg wait for instances running timeout on mon sep ,1 1842,6577379308.0,IssuesEvent,2017-09-12 00:30:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,iam.py - existing user not added to existing group,affects_1.9 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME modules/core/cloud/amazon/iam.py ##### ANSIBLE VERSION ``` ansible 1.9.4 ``` ##### CONFIGURATION Default. ##### OS / ENVIRONMENT CentOS Linux release 7.2.1511 (Core) ##### SUMMARY The IAM user is not added to the IAM group. ##### STEPS TO REPRODUCE ``` - name: Create IAM user iam: iam_type: user name: proj_user path: '/' state: present - name: Add IAM user to IAM groups iam: iam_type: user name: proj_user path: '/' state: update groups: TestGroup ``` Followed the example given here http://docs.ansible.com/ansible/iam_module.html ##### EXPECTED RESULTS The proj_user should become a member of TestGroup. ##### ACTUAL RESULTS Test Group is empty but module returns status as changed which is not true. Sequential re-runs do not change the situation. ``` TASK: [Add IAM user to IAM groups] ********************************* changed: [127.0.0.1] => {""changed"": true, ""groups"": [""TestGroup""], ""keys"": {}, ""user_name"": ""proj_user""} ``` ",True,"iam.py - existing user not added to existing group - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME modules/core/cloud/amazon/iam.py ##### ANSIBLE VERSION ``` ansible 1.9.4 ``` ##### CONFIGURATION Default. ##### OS / ENVIRONMENT CentOS Linux release 7.2.1511 (Core) ##### SUMMARY The IAM user is not added to the IAM group. 
##### STEPS TO REPRODUCE ``` - name: Create IAM user iam: iam_type: user name: proj_user path: '/' state: present - name: Add IAM user to IAM groups iam: iam_type: user name: proj_user path: '/' state: update groups: TestGroup ``` Followed the example given here http://docs.ansible.com/ansible/iam_module.html ##### EXPECTED RESULTS The proj_user should become a member of TestGroup. ##### ACTUAL RESULTS Test Group is empty but module returns status as changed which is not true. Sequential re-runs do not change the situation. ``` TASK: [Add IAM user to IAM groups] ********************************* changed: [127.0.0.1] => {""changed"": true, ""groups"": [""TestGroup""], ""keys"": {}, ""user_name"": ""proj_user""} ``` ",1,iam py existing user not added to existing group issue type bug report component name modules core cloud amazon iam py ansible version ansible configuration default os environment centos linux release core summary the iam user is not added to the iam group steps to reproduce name create iam user iam iam type user name proj user path state present name add iam user to iam groups iam iam type user name proj user path state update groups testgroup followed the example given here expected results the proj user should become a member of testgroup actual results test group is empty but module returns status as changed which is not true sequential re runs do not change the situation task changed changed true groups keys user name proj user ,1 1880,6577510511.0,IssuesEvent,2017-09-12 01:25:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,allow iam group assignment without wiping out all other groups,affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Component Name: iam module ##### Ansible Version: ``` 2.0.1.0 ``` ##### Ansible Configuration: NA ##### Environment: NA ##### Summary: It's currently impossible to assign an IAM group to a user without wiping out all other groups. The only way to assign groups is in an iam module task: ``` - iam: iam_type=user name=user_name state=present groups=""{{ iam_groups }}"" ``` However, this always wipes existing groups, losing data for existing users if you don't already know all of the groups assigned to the user. Other modules with group assignments use another parameter to allow appending to lists (e.g. mysql_user has ""append_privs"" and ec2_group has ""purge_rules"") ##### Steps To Reproduce: ``` vars: groups1: - groups1_example groups2: - groups2_example tasks: - iam: iam_type=user name=user_name state=present groups=""{{ groups1 }}"" - iam: iam_type=user name=user_name state=present groups=""{{ groups2 }}"" ``` user will only have groups2_example group assigned. ##### Expected Results: user would belong to both groups1_example and groups2_example ##### Actual Results: user will only have groups2_example group assigned. ",True,"allow iam group assignment without wiping out all other groups - ##### Issue Type: - Feature Idea ##### Component Name: iam module ##### Ansible Version: ``` 2.0.1.0 ``` ##### Ansible Configuration: NA ##### Environment: NA ##### Summary: It's currently impossible to assign an IAM group to a user without wiping out all other groups. The only way to assign groups is in an iam module task: ``` - iam: iam_type=user name=user_name state=present groups=""{{ iam_groups }}"" ``` However, this always wipes existing groups, losing data for existing users if you don't already know all of the groups assigned to the user. 
Other modules with group assignments use another parameter to allow appending to lists (e.g. mysql_user has ""append_privs"" and ec2_group has ""purge_rules"") ##### Steps To Reproduce: ``` vars: groups1: - groups1_example groups2: - groups2_example tasks: - iam: iam_type=user name=user_name state=present groups=""{{ groups1 }}"" - iam: iam_type=user name=user_name state=present groups=""{{ groups2 }}"" ``` user will only have groups2_example group assigned. ##### Expected Results: user would belong to both groups1_example and groups2_example ##### Actual Results: user will only have groups2_example group assigned. ",1,allow iam group assignment without wiping out all other groups issue type feature idea component name iam module ansible version ansible configuration na environment na summary it s currently impossible to assign an iam group to a user without wiping out all other groups the only way to assign groups is in an iam module task iam iam type user name user name state present groups iam groups however this always wipes existing groups losing data for existing users if you don t already know all of the groups assigned to the user other modules with group assignments use another parameter to allow appending to lists e g mysql user has append privs and group has purge rules steps to reproduce vars example example tasks iam iam type user name user name state present groups iam iam type user name user name state present groups user will only have example group assigned expected results user would belong to both example and example actual results user will only have example group assigned ,1 1908,6577567505.0,IssuesEvent,2017-09-12 01:48:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vpc remove doesn't work as documented,affects_2.0 aws bug_report cloud docs_report waiting_on_maintainer,"##### Issue Type: bug report Module: ec2_vpc ##### Ansible Version: ansible 2.0.0.2 ##### Summary: Error in documentation: http://docs.ansible.com/ansible/ec2_vpc_module.html#examples ##### Steps To Reproduce: Documentation states: ``` # Removal of a VPC by id ec2_vpc: state: absent vpc_id: vpc-aaaaaaa region: us-west-2 ``` However, when run, there is an error: ``` TASK [delete vpc] ************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""missing required arguments: resource_tags""} ``` So it isn't possible to remove a vpc just by stating its id. ",True,"ec2_vpc remove doesn't work as documented - ##### Issue Type: bug report Module: ec2_vpc ##### Ansible Version: ansible 2.0.0.2 ##### Summary: Error in documentation: http://docs.ansible.com/ansible/ec2_vpc_module.html#examples ##### Steps To Reproduce: Documentation states: ``` # Removal of a VPC by id ec2_vpc: state: absent vpc_id: vpc-aaaaaaa region: us-west-2 ``` However, when run, there is an error: ``` TASK [delete vpc] ************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""missing required arguments: resource_tags""} ``` So it isn't possible to remove a vpc just by stating its id. 
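The "missing required arguments: resource_tags" error above suggests the module insists on resource_tags even when deleting; a minimal sketch of that workaround (the tag key and value are placeholders, not taken from the report) would be:
```
# Sketch of the workaround implied by the "missing required arguments: resource_tags"
# error: supply a resource_tags dict even when removing the VPC.
# The tag below is a placeholder, not from the original issue.
- name: Removal of a VPC by id, supplying resource_tags
  ec2_vpc:
    state: absent
    vpc_id: vpc-aaaaaaa
    region: us-west-2
    resource_tags: { "Name": "placeholder" }
```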
",1, vpc remove doesn t work as documented issue type bug report module vpc ansible version ansible summary error in documentation steps to reproduce documentation states removal of a vpc by id vpc state absent vpc id vpc aaaaaaa region us west however when run there is an error task fatal failed changed false failed true msg missing required arguments resource tags so it isn t possible to remove a vpc just by stating its id ,1 1083,4931796273.0,IssuesEvent,2016-11-28 11:24:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"junos_config: obj, end = self.raw_decode(s, idx=_w(s, 0).end()) -> TypeError: expected string or buffer",affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` Run Junos test cases, or manually: - name: configure device with config junos_config: src: basic/config.j2 provider: ""{{ netconf }}"" register: result templates/basic/config.j2 interfaces { lo0 { unit 0 { family inet { address 1.1.1.1/32; } } } } ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [junos_config : configure device with config] ***************************** task path: /tmp/run-network-test/junos/devel/ansible/test/integration/targets/junos_config/tests/netconf/src_basic.yaml:11 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/junos/junos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774 `"" && echo ansible-tmp-1479324113.09-83039427345774=""` echo $HOME/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774 `"" ) && sleep 0' PUT /tmp/tmpj7mocy TO /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/ /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 343, in main() File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 334, in main run(module, result) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 292, in run return load_config(module, result) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 245, in load_config config_format = module.params['src_format'] or guess_format(candidate) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 176, in guess_format json.loads(config) File ""/usr/lib/python2.7/json/__init__.py"", line 339, in loads return _default_decoder.decode(s) File ""/usr/lib/python2.7/json/decoder.py"", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) TypeError: expected string or buffer fatal: [vsrx01]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""provider"": { ""host"": ""vsrx01"", ""password"": ""Ansible"", ""transport"": ""netconf"", ""username"": ""ansible"" }, ""src"": ""interfaces {\n lo0 {\n unit 0 {\n family inet {\n address 1.1.1.1/32;\n }\n }\n }\n}\n\n"" }, ""module_name"": ""junos_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 343, in \n main()\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 334, in main\n run(module, result)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 292, in run\n return load_config(module, result)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 245, in load_config\n config_format = module.params['src_format'] or guess_format(candidate)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 176, in guess_format\n json.loads(config)\n File \""/usr/lib/python2.7/json/__init__.py\"", line 339, in loads\n return _default_decoder.decode(s)\n File \""/usr/lib/python2.7/json/decoder.py\"", line 364, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\nTypeError: expected string or buffer\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" ``` ",True,"junos_config: obj, end = self.raw_decode(s, idx=_w(s, 0).end()) -> TypeError: expected string or buffer - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` Run Junos test cases, or manually: - name: configure device with config junos_config: src: basic/config.j2 provider: ""{{ netconf }}"" register: result templates/basic/config.j2 interfaces { lo0 { unit 0 { family inet { address 1.1.1.1/32; } } } } ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [junos_config : configure device with config] ***************************** task path: /tmp/run-network-test/junos/devel/ansible/test/integration/targets/junos_config/tests/netconf/src_basic.yaml:11 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/junos/junos_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774 `"" && echo ansible-tmp-1479324113.09-83039427345774=""` echo $HOME/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774 `"" ) && sleep 0' PUT /tmp/tmpj7mocy TO /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/ /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/junos_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1479324113.09-83039427345774/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 343, in main() File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 334, in main run(module, result) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 292, in run return load_config(module, result) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 245, in load_config config_format = module.params['src_format'] or guess_format(candidate) File ""/tmp/ansible_5AgCUV/ansible_module_junos_config.py"", line 176, in guess_format json.loads(config) File ""/usr/lib/python2.7/json/__init__.py"", line 339, in loads return _default_decoder.decode(s) File ""/usr/lib/python2.7/json/decoder.py"", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) TypeError: expected string or buffer fatal: [vsrx01]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""provider"": { ""host"": ""vsrx01"", ""password"": ""Ansible"", ""transport"": ""netconf"", ""username"": ""ansible"" }, ""src"": ""interfaces {\n lo0 {\n unit 0 {\n family inet {\n address 1.1.1.1/32;\n }\n }\n }\n}\n\n"" }, ""module_name"": ""junos_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 343, in \n main()\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 334, in main\n run(module, result)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 292, in run\n return load_config(module, result)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 245, in load_config\n config_format = module.params['src_format'] or guess_format(candidate)\n File \""/tmp/ansible_5AgCUV/ansible_module_junos_config.py\"", line 176, in guess_format\n json.loads(config)\n File \""/usr/lib/python2.7/json/__init__.py\"", line 339, in loads\n return _default_decoder.decode(s)\n File \""/usr/lib/python2.7/json/decoder.py\"", line 364, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\nTypeError: expected string or buffer\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" ``` ",1,junos config obj end self raw decode s idx w s end typeerror expected string or buffer issue type bug report component name junos config ansible version configuration os environment summary steps to reproduce run junos test cases or manually name configure device with config junos config src basic config provider netconf register result templates basic config interfaces unit family inet address expected results actual results yaml task task path tmp run network test junos devel ansible test integration targets junos config tests netconf src basic yaml using module file usr local lib dist packages ansible modules core network junos junos config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home johnb ansible tmp ansible tmp junos config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp junos config py sleep exec bin sh c python home johnb ansible tmp ansible tmp junos config py rm rf home johnb ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module junos config py line in main file tmp ansible ansible module junos config py line in main run 
module result file tmp ansible ansible module junos config py line in run return load config module result file tmp ansible ansible module junos config py line in load config config format module params or guess format candidate file tmp ansible ansible module junos config py line in guess format json loads config file usr lib json init py line in loads return default decoder decode s file usr lib json decoder py line in decode obj end self raw decode s idx w s end typeerror expected string or buffer fatal failed changed false failed true invocation module args provider host password ansible transport netconf username ansible src interfaces n n unit n family inet n address n n n n n n module name junos config module stderr traceback most recent call last n file tmp ansible ansible module junos config py line in n main n file tmp ansible ansible module junos config py line in main n run module result n file tmp ansible ansible module junos config py line in run n return load config module result n file tmp ansible ansible module junos config py line in load config n config format module params or guess format candidate n file tmp ansible ansible module junos config py line in guess format n json loads config n file usr lib json init py line in loads n return default decoder decode s n file usr lib json decoder py line in decode n obj end self raw decode s idx w s end ntypeerror expected string or buffer n module stdout msg module failure ,1 1679,6574141070.0,IssuesEvent,2017-09-11 11:40:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Issue connecting when using nxos_acl,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_acl ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/robert.nanney/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Can not connect with transport: cli to the device ##### STEPS TO REPRODUCE ``` --- - hosts: all connection: local gather_facts: false vars: cli: transport: cli tasks: - name: show hostname nxos_command: commands: show hostname host: ""{{ inventory_hostname }}"" provider: ""{{ cli }}"" register: hostname - name: debug hostname debug: var=hostname - name: db acl nxos_acl: name: cloud-dbaas-snet-in seq: 10 action: permit proto: tcp src: any dest: any state: present host: ""{{inventory_hostname}}"" provider: ""{{ cli }}"" ``` ##### EXPECTED RESULTS I expected a connection to the switch ##### ACTUAL RESULTS ``` ~$ ansible-playbook db_acl_test.yml -i lab3k -vvvv Using /home/robert.nanney/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: db_acl_test.yml ****************************************************** 1 plays in db_acl_test.yml PLAY [all] ********************************************************************* TASK [show hostname] *********************************************************** task path: /home/robert.nanney/db_acl_test.yml:10 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.127.49.121> ESTABLISH LOCAL CONNECTION FOR USER: robert.nanney <10.127.49.121> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789 `"" && echo ansible-tmp-1479507500.82-139993289090789=""` echo 
$HOME/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789 `"" ) && sleep 0' <10.127.49.121> PUT /tmp/tmpffKXca TO /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py <10.127.49.121> EXEC /bin/sh -c 'chmod u+x /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/ /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py && sleep 0' <10.127.49.121> EXEC /bin/sh -c '/usr/bin/python /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py; rm -rf ""/home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/"" > /dev/null 2>&1 && sleep 0' ok: [10.127.49.121] => { ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show hostname"" ], ""host"": ""10.127.49.121"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""transport"": ""cli"" }, ""retries"": 10, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""nxos_command"" }, ""stdout"": [ ""\nyc3-5-3.sat6 \n"" ], ""stdout_lines"": [ [ """", ""yc3-5-3.sat6 "", """" ] ], ""warnings"": [] } TASK [debug hostname] ********************************************************** task path: /home/robert.nanney/db_acl_test.yml:17 ok: [10.127.49.121] => { ""hostname"": { ""changed"": false, ""stdout"": [ ""\nyc3-5-3.sat6 \n"" ], ""stdout_lines"": [ [ """", ""yc3-5-3.sat6 "", """" ] ], ""warnings"": [] } } TASK [db acl] ****************************************************************** task path: /home/robert.nanney/db_acl_test.yml:20 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_acl.py <10.127.49.121> ESTABLISH LOCAL CONNECTION FOR USER: robert.nanney <10.127.49.121> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841 `"" && echo ansible-tmp-1479507503.1-137909917353841=""` echo $HOME/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841 `"" ) && sleep 0' <10.127.49.121> PUT /tmp/tmp9S6m5r TO /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py <10.127.49.121> EXEC /bin/sh -c 'chmod u+x /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/ /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py && sleep 0' <10.127.49.121> EXEC /bin/sh -c '/usr/bin/python /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py; rm -rf ""/home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/"" > /dev/null 2>&1 && sleep 0' fatal: [10.127.49.121]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ack"": null, ""action"": ""permit"", ""auth_pass"": null, ""authorize"": false, ""config"": null, ""dest"": ""any"", ""dest_port1"": null, ""dest_port2"": null, ""dest_port_op"": null, ""dscp"": null, ""established"": null, ""fin"": null, ""fragments"": null, ""host"": ""10.127.49.121"", ""include_defaults"": ""False"", ""log"": null, ""name"": ""cloud-dbaas-snet-in"", ""password"": null, ""port"": null, ""precedence"": null, ""proto"": ""tcp"", ""protocol"": ""http"", ""provider"": { ""transport"": ""cli"" }, ""psh"": null, ""remark"": null, ""rst"": null, ""save"": false, ""seq"": ""10"", ""src"": ""any"", ""src_port1"": null, ""src_port2"": null, ""src_port_op"": null, ""ssh_keyfile"": null, ""state"": ""present"", ""syn"": null, ""time_range"": null, ""timeout"": 10, ""transport"": ""cli"", ""urg"": null, ""use_ssl"": false, ""username"": null, ""validate_certs"": true }, ""module_name"": ""nxos_acl"" }, ""msg"": ""failed to connect to 10.127.49.121:22"" } to retry, use: --limit @/home/robert.nanney/db_acl_test.retry PLAY RECAP ********************************************************************* 10.127.49.121 : ok=2 changed=0 unreachable=0 failed=1 ``` ",True,"Issue connecting when using nxos_acl - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_acl ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/robert.nanney/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Can not connect with transport: cli to the device ##### STEPS TO REPRODUCE ``` --- - hosts: all connection: local gather_facts: false vars: cli: transport: cli tasks: - name: show hostname nxos_command: commands: show hostname host: ""{{ inventory_hostname }}"" provider: ""{{ cli }}"" register: hostname - name: debug hostname debug: var=hostname - name: db acl nxos_acl: name: cloud-dbaas-snet-in seq: 10 action: permit proto: tcp src: any dest: any state: present host: ""{{inventory_hostname}}"" provider: ""{{ cli }}"" ``` ##### EXPECTED RESULTS I expected a connection to the switch ##### ACTUAL RESULTS ``` ~$ ansible-playbook db_acl_test.yml -i lab3k -vvvv Using /home/robert.nanney/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: db_acl_test.yml ****************************************************** 1 plays in db_acl_test.yml PLAY [all] ********************************************************************* TASK [show hostname] *********************************************************** task path: /home/robert.nanney/db_acl_test.yml:10 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.127.49.121> ESTABLISH LOCAL CONNECTION FOR USER: robert.nanney <10.127.49.121> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789 `"" && echo ansible-tmp-1479507500.82-139993289090789=""` echo $HOME/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789 `"" ) && sleep 0' <10.127.49.121> PUT /tmp/tmpffKXca TO /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py <10.127.49.121> EXEC /bin/sh -c 'chmod u+x /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/ /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py && sleep 0' <10.127.49.121> EXEC 
/bin/sh -c '/usr/bin/python /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/nxos_command.py; rm -rf ""/home/robert.nanney/.ansible/tmp/ansible-tmp-1479507500.82-139993289090789/"" > /dev/null 2>&1 && sleep 0' ok: [10.127.49.121] => { ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show hostname"" ], ""host"": ""10.127.49.121"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""transport"": ""cli"" }, ""retries"": 10, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""nxos_command"" }, ""stdout"": [ ""\nyc3-5-3.sat6 \n"" ], ""stdout_lines"": [ [ """", ""yc3-5-3.sat6 "", """" ] ], ""warnings"": [] } TASK [debug hostname] ********************************************************** task path: /home/robert.nanney/db_acl_test.yml:17 ok: [10.127.49.121] => { ""hostname"": { ""changed"": false, ""stdout"": [ ""\nyc3-5-3.sat6 \n"" ], ""stdout_lines"": [ [ """", ""yc3-5-3.sat6 "", """" ] ], ""warnings"": [] } } TASK [db acl] ****************************************************************** task path: /home/robert.nanney/db_acl_test.yml:20 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_acl.py <10.127.49.121> ESTABLISH LOCAL CONNECTION FOR USER: robert.nanney <10.127.49.121> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841 `"" && echo ansible-tmp-1479507503.1-137909917353841=""` echo $HOME/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841 `"" ) && sleep 0' <10.127.49.121> PUT /tmp/tmp9S6m5r TO /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py <10.127.49.121> EXEC /bin/sh -c 'chmod u+x /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/ /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py && sleep 0' <10.127.49.121> EXEC /bin/sh -c '/usr/bin/python /home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/nxos_acl.py; rm -rf ""/home/robert.nanney/.ansible/tmp/ansible-tmp-1479507503.1-137909917353841/"" > /dev/null 2>&1 && sleep 0' fatal: [10.127.49.121]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ack"": null, ""action"": ""permit"", ""auth_pass"": null, ""authorize"": false, ""config"": null, ""dest"": ""any"", ""dest_port1"": null, ""dest_port2"": null, ""dest_port_op"": null, ""dscp"": null, ""established"": null, ""fin"": null, ""fragments"": null, ""host"": ""10.127.49.121"", ""include_defaults"": ""False"", ""log"": null, ""name"": ""cloud-dbaas-snet-in"", ""password"": null, ""port"": null, ""precedence"": null, ""proto"": ""tcp"", ""protocol"": ""http"", ""provider"": { ""transport"": ""cli"" }, ""psh"": null, ""remark"": null, ""rst"": null, ""save"": false, ""seq"": ""10"", ""src"": ""any"", ""src_port1"": null, ""src_port2"": null, ""src_port_op"": null, ""ssh_keyfile"": null, ""state"": ""present"", ""syn"": null, ""time_range"": null, ""timeout"": 10, ""transport"": ""cli"", ""urg"": null, ""use_ssl"": false, ""username"": null, ""validate_certs"": true }, ""module_name"": ""nxos_acl"" }, ""msg"": ""failed to connect to 10.127.49.121:22"" } to retry, use: --limit @/home/robert.nanney/db_acl_test.retry PLAY RECAP ********************************************************************* 10.127.49.121 : ok=2 changed=0 unreachable=0 failed=1 ``` ",1,issue connecting when using nxos acl issue type bug report component name nxos acl ansible version ansible config file home robert nanney ansible cfg configured module search path default w o overrides configuration ask pass false os environment running from ubuntu gnu linux generic managing cisco chassis summary can not connect with transport cli to the device steps to reproduce run a task with nxos acl with the transport flag set to cli hosts all connection local gather facts false vars cli transport cli tasks name show hostname nxos command commands show hostname host inventory hostname provider cli register hostname name debug hostname debug var hostname name db acl nxos acl name cloud dbaas snet in seq action permit proto tcp src any dest any state present host inventory hostname provider cli expected results i expected a connection to the switch actual results ansible playbook db acl test yml i vvvv using home robert nanney ansible cfg as config file loading callback plugin default of type stdout from usr local lib dist packages ansible plugins callback init pyc playbook db acl test yml plays in db acl test yml play task task path home robert nanney db acl test yml using module file usr local lib dist packages ansible modules core network nxos nxos command py establish local connection for user robert nanney exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpffkxca to home robert nanney ansible tmp ansible tmp nxos command py exec bin sh c chmod u x home robert nanney ansible tmp ansible tmp home robert nanney ansible tmp ansible tmp nxos command py sleep exec bin sh c usr bin python home robert nanney ansible tmp ansible tmp nxos command py rm rf home robert nanney ansible tmp ansible tmp dev null sleep ok changed false invocation module args auth pass null authorize false commands show hostname host interval match all password value specified in no log parameter port null provider transport cli retries ssh keyfile null timeout transport cli use ssl false username admin validate certs true wait for null module name nxos command stdout n stdout lines warnings task task path home robert nanney db acl test yml ok hostname changed false stdout n stdout lines warnings task 
task path home robert nanney db acl test yml using module file usr local lib dist packages ansible modules core network nxos nxos acl py establish local connection for user robert nanney exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home robert nanney ansible tmp ansible tmp nxos acl py exec bin sh c chmod u x home robert nanney ansible tmp ansible tmp home robert nanney ansible tmp ansible tmp nxos acl py sleep exec bin sh c usr bin python home robert nanney ansible tmp ansible tmp nxos acl py rm rf home robert nanney ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args ack null action permit auth pass null authorize false config null dest any dest null dest null dest port op null dscp null established null fin null fragments null host include defaults false log null name cloud dbaas snet in password null port null precedence null proto tcp protocol http provider transport cli psh null remark null rst null save false seq src any src null src null src port op null ssh keyfile null state present syn null time range null timeout transport cli urg null use ssl false username null validate certs true module name nxos acl msg failed to connect to to retry use limit home robert nanney db acl test retry play recap ok changed unreachable failed ,1 1884,6577516579.0,IssuesEvent,2017-09-12 01:27:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_metric_alarm always shows changes,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX El Capitan ##### SUMMARY When running this module with the same inputs, ansible reports changes. This module should be idempotent. ##### STEPS TO REPRODUCE https://gist.github.com/dmcnaught/e06f2230c0cbcbdf6329 gist also shows Alarm history from cloudwatch on 2 consecutive runs (seems to show just yaml order differences - could that be the problem? ##### EXPECTED RESULTS No changes on subsequent ansible runs ##### ACTUAL RESULTS Changes reported: ``` TASK [kube-up-mods : Configure Metric Alarms and link to Scaling Policies] ***** changed: [localhost] => (item={u'threshold': 50.0, u'comparison': u'>=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6da1ab4d-fca4-4f0b-9c53-ca9b582ba5da:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Increase Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleUp'}) changed: [localhost] => (item={u'threshold': 20.0, u'comparison': u'<=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6ffb0797-d089-4f2a-a1b7-16da6bb42de1:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Decrease Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleDown'}) ``` ",True,"ec2_metric_alarm always shows changes - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX El Capitan ##### SUMMARY When running this module with the same inputs, ansible reports changes. This module should be idempotent. 
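Since the reproduction steps below only link to a gist, a rough illustration of the kind of alarm task being re-run (all values are made up, not taken from the gist) might look like:
```
# Illustrative only: an ec2_metric_alarm task that, re-run with identical
# parameters, would be expected to report no change. All values are placeholders.
- name: scale-up alarm (illustrative)
  ec2_metric_alarm:
    state: present
    region: us-east-1
    name: example-asg-ScaleUp
    metric: CPUUtilization
    namespace: AWS/EC2
    statistic: Average
    comparison: ">="
    threshold: 50.0
    period: 300
    evaluation_periods: 1
    unit: Percent
    dimensions: { "AutoScalingGroupName": "example-asg" }
    alarm_actions:
      - "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"
```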
##### STEPS TO REPRODUCE https://gist.github.com/dmcnaught/e06f2230c0cbcbdf6329 gist also shows Alarm history from cloudwatch on 2 consecutive runs (seems to show just yaml order differences - could that be the problem? ##### EXPECTED RESULTS No changes on subsequent ansible runs ##### ACTUAL RESULTS Changes reported: ``` TASK [kube-up-mods : Configure Metric Alarms and link to Scaling Policies] ***** changed: [localhost] => (item={u'threshold': 50.0, u'comparison': u'>=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6da1ab4d-fca4-4f0b-9c53-ca9b582ba5da:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Increase Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleUp'}) changed: [localhost] => (item={u'threshold': 20.0, u'comparison': u'<=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6ffb0797-d089-4f2a-a1b7-16da6bb42de1:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Decrease Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleDown'}) ``` ",1, metric alarm always shows changes issue type bug report component name metric alarm ansible version ansible configuration os environment osx el capitan summary when running this module with the same inputs ansible reports changes this module should be idempotent steps to reproduce gist also shows alarm history from cloudwatch on consecutive runs seems to show just yaml order differences could that be the problem expected results no changes on subsequent ansible runs actual results changes reported task changed item u threshold u comparison u u alarm actions u name u hermes minion group us east scaleup changed item u threshold u comparison u u alarm actions u name u hermes minion group us east scaledown ,1 865,4534587162.0,IssuesEvent,2016-09-08 15:00:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apache2_module fails for php7.0 on Ubuntu Xenial,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100) lib/ansible/modules/core: (detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100) lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100) config file = /home/rowan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ubuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M this breaks the regexp checking if the module is enabled. I've made a work around here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions. Not entirely sure what the best solution to this is. ##### STEPS TO REPRODUCE Run apache2_module with name=php7.0 state=present on a xenial server. 
",True,"apache2_module fails for php7.0 on Ubuntu Xenial - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100) lib/ansible/modules/core: (detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100) lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100) config file = /home/rowan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ubuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M this breaks the regexp checking if the module is enabled. I've made a work around here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions. Not entirely sure what the best solution to this is. ##### STEPS TO REPRODUCE Run apache2_module with name=php7.0 state=present on a xenial server. ",1, module fails for on ubuntu xenial issue type bug report component name module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home rowan ansible cfg configured module search path default w o overrides configuration n a os environment n a summary ubuntu xenial lists the module as module when running m this breaks the regexp checking if the module is enabled i ve made a work around here but i didn t make a pr since i expect it may break other distros versions not entirely sure what the best solution to this is steps to reproduce run module with name state present on a xenial server ,1 1751,6574956800.0,IssuesEvent,2017-09-11 14:36:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module always fails on update if local has modification,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` 2.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT debian:8.0 jessie ##### SUMMARY If local git repository has modification, an update attempt of always fails with Local modifications exist, even if force=yes was given. 
##### STEPS TO REPRODUCE ``` tasks: - name: update project dependency git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes with_items: ""{{ deps }}"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` failed: [127.0.0.1] => { ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""item"": { ""location"": ""/opt/tiger/neihan/conf"", ""name"": ""neihan/conf"", ""scm_refspec"": ""refs/heads/master"", ""scm_revision"": ""master"", ""scm_url"": ""ssh://*********/neihan/conf"" }, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 1023, in \r\n main()\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",True,"git module always fails on update if local has modification - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` 2.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT debian:8.0 jessie ##### SUMMARY If local git repository has modification, an update attempt of always fails with Local modifications exist, even if force=yes was given. ##### STEPS TO REPRODUCE ``` tasks: - name: update project dependency git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes with_items: ""{{ deps }}"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` failed: [127.0.0.1] => { ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""item"": { ""location"": ""/opt/tiger/neihan/conf"", ""name"": ""neihan/conf"", ""scm_refspec"": ""refs/heads/master"", ""scm_revision"": ""master"", ""scm_url"": ""ssh://*********/neihan/conf"" }, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 1023, in \r\n main()\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",1,git module always fails on update if local has modification issue type bug report component name git ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific debian jessie summary if local git repository has modification an update attempt of always fails with local modifications exist even if force yes was given steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used tasks name update project dependency git dest item location quote repo item scm url quote version item scm revision quote force yes refspec item scm refspec accept hostkey yes with items deps expected results actual results failed failed true invocation module name git item location opt tiger neihan conf name neihan conf scm refspec refs heads master 
scm revision master scm url ssh neihan conf module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module git py line in r n main r n file tmp ansible ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure ,1 1016,4803535415.0,IssuesEvent,2016-11-02 10:26:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apache2_module fails for libapache2-mod-proxy-uwsgi,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` $ ./ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY A playbook that was working with ansible 2.1.2.0 started failing when 2.2.0.0 was released. The playbook: 1) apt installs a bunch of packages, including apache2, libapache2-mod-proxy-html, and libapache2-mod-proxy-uwsgi, uwsgi, etc. 2) Uses apache2_module to enable proxy_http #### Analysis The apache2_module module was changed to use `apache2ctl -M` to list modules in 2.2.0.0, which fails with the playbook due to an oddity of the Ubuntu packages: - libapache2-mod-proxy-html : installs the modules disabled. - libapache2-mod-proxy-uwsgi : installs the module enabled. But, mod_proxy_uwsgi requires mod_proxy_http to be enabled, and since mod_proxy_uwsgi is enabled but mod_proxy_http is disabled, `apache2ctl -M` fails with a confusing error (see output below) See https://github.com/ansible/ansible-modules-core/pull/2417 for the change that added use of `apache2ctl -M`. A potential fix is to provide special handling for mod_proxy_uwsgi so that it doesn't use `apache2ctl -M` or something. ##### STEPS TO REPRODUCE ``` - name: Install packages become: true apt: name={{ item }} with_items: - apache2 - git - libapache2-mod-proxy-html - libapache2-mod-proxy-uwsgi - python-dev - python-pip - python-virtualenv - uwsgi-emperor - uwsgi-plugin-python - name: Enable modules for mod_proxy_uwsgi become: true apache2_module: name={{ item }} with_items: - proxy_http notify: - Restart httpd ``` ##### EXPECTED RESULTS Expected the old playbook to work. ##### ACTUAL RESULTS ``` failed: [default] (item=proxy_http) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""proxy_http"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""proxy_http"", ""msg"": ""Error executing /usr/sbin/apache2ctl: apache2: Syntax error on line 140 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/proxy_uwsgi.load: Cannot load /usr/lib/apache2/modules/mod_proxy_uwsgi.so into server: /usr/lib/apache2/modules/mod_proxy_uwsgi.so: undefined symbol: ap_proxy_backend_broke\n"" } ``` As you can see the error isn't very useful as far as figuring out the problem. 
##### Workaround The following steps worked around the issue: ``` - name: Install packages become: true apt: name={{ item }} with_items: - apache2 - git - libapache2-mod-proxy-html - python-dev - python-pip - python-virtualenv - uwsgi-emperor - uwsgi-plugin-python - name: Enable modules for mod_proxy_uwsgi become: true apache2_module: name={{ item }} with_items: - proxy_http notify: - Restart httpd - name: Install mod-proxy-uwsgi become: true apt: name={{ item }} with_items: - libapache2-mod-proxy-uwsgi notify: - Restart httpd ```",True,"apache2_module fails for libapache2-mod-proxy-uwsgi - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` $ ./ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY A playbook that was working with ansible 2.1.2.0 started failing when 2.2.0.0 was released. The playbook: 1) apt installs a bunch of packages, including apache2, libapache2-mod-proxy-html, and libapache2-mod-proxy-uwsgi, uwsgi, etc. 2) Uses apache2_module to enable proxy_http #### Analysis The apache2_module module was changed to use `apache2ctl -M` to list modules in 2.2.0.0, which fails with the playbook due to an oddity of the Ubuntu packages: - libapache2-mod-proxy-html : installs the modules disabled. - libapache2-mod-proxy-uwsgi : installs the module enabled. But, mod_proxy_uwsgi requires mod_proxy_http to be enabled, and since mod_proxy_uwsgi is enabled but mod_proxy_http is disabled, `apache2ctl -M` fails with a confusing error (see output below) See https://github.com/ansible/ansible-modules-core/pull/2417 for the change that added use of `apache2ctl -M`. A potential fix is to provide special handling for mod_proxy_uwsgi so that it doesn't use `apache2ctl -M` or something. ##### STEPS TO REPRODUCE ``` - name: Install packages become: true apt: name={{ item }} with_items: - apache2 - git - libapache2-mod-proxy-html - libapache2-mod-proxy-uwsgi - python-dev - python-pip - python-virtualenv - uwsgi-emperor - uwsgi-plugin-python - name: Enable modules for mod_proxy_uwsgi become: true apache2_module: name={{ item }} with_items: - proxy_http notify: - Restart httpd ``` ##### EXPECTED RESULTS Expected the old playbook to work. ##### ACTUAL RESULTS ``` failed: [default] (item=proxy_http) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""proxy_http"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""proxy_http"", ""msg"": ""Error executing /usr/sbin/apache2ctl: apache2: Syntax error on line 140 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/proxy_uwsgi.load: Cannot load /usr/lib/apache2/modules/mod_proxy_uwsgi.so into server: /usr/lib/apache2/modules/mod_proxy_uwsgi.so: undefined symbol: ap_proxy_backend_broke\n"" } ``` As you can see the error isn't very useful as far as figuring out the problem. 
##### Workaround The following steps worked around the issue: ``` - name: Install packages become: true apt: name={{ item }} with_items: - apache2 - git - libapache2-mod-proxy-html - python-dev - python-pip - python-virtualenv - uwsgi-emperor - uwsgi-plugin-python - name: Enable modules for mod_proxy_uwsgi become: true apache2_module: name={{ item }} with_items: - proxy_http notify: - Restart httpd - name: Install mod-proxy-uwsgi become: true apt: name={{ item }} with_items: - libapache2-mod-proxy-uwsgi notify: - Restart httpd ```",1, module fails for mod proxy uwsgi issue type bug report component name module ansible version ansible version ansible config file configured module search path default w o overrides configuration none os environment ubuntu summary a playbook that was working with ansible started failing when was released the playbook apt installs a bunch of packages including mod proxy html and mod proxy uwsgi uwsgi etc uses module to enable proxy http analysis the module module was changed to use m to list modules in which fails with the playbook due to an oddity of the ubuntu packages mod proxy html installs the modules disabled mod proxy uwsgi installs the module enabled but mod proxy uwsgi requires mod proxy http to be enabled and since mod proxy uwsgi is enabled but mod proxy http is disabled m fails with a confusing error see output below see for the change that added use of m a potential fix is to provide special handling for mod proxy uwsgi so that it doesn t use m or something steps to reproduce name install packages become true apt name item with items git mod proxy html mod proxy uwsgi python dev python pip python virtualenv uwsgi emperor uwsgi plugin python name enable modules for mod proxy uwsgi become true module name item with items proxy http notify restart httpd expected results expected the old playbook to work actual results failed item proxy http failed true invocation module args force false name proxy http state present module name module item proxy http msg error executing usr sbin syntax error on line of etc conf syntax error on line of etc mods enabled proxy uwsgi load cannot load usr lib modules mod proxy uwsgi so into server usr lib modules mod proxy uwsgi so undefined symbol ap proxy backend broke n as you can see the error isn t very useful as far as figuring out the problem workaround the following steps worked around the issue name install packages become true apt name item with items git mod proxy html python dev python pip python virtualenv uwsgi emperor uwsgi plugin python name enable modules for mod proxy uwsgi become true module name item with items proxy http notify restart httpd name install mod proxy uwsgi become true apt name item with items mod proxy uwsgi notify restart httpd ,1 1888,6577527569.0,IssuesEvent,2017-09-12 01:32:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Docker pull: always and state: reloaded redeploys named containers every time,affects_1.9 bug_report cloud docker waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: Docker ##### Ansible Version: Doesn't work in 1.9.1 to 2.0.0.2 ##### Ansible Configuration: ``` [defaults] host_key_checking=False display_skipped_hosts=False force_handlers = True hostfile = inventory/ec2.py retry_files_enabled = False [ssh_connection] pipelining=True ``` ##### Environment: Ubuntu 14.04 from OSX 10.10 ##### Summary: I'm not sure if it ever worked, or if I just imagined it worked, but according to the documentation: 
`""reloaded"" (added in Ansible 1.9) asserts that all matching containers are running and restarts any that have any images or configuration out of date.` This does not seem to be the case, as named containers that have nothing changed, will be reloaded every time. I'm almost positive this was properly working at some point so if someone could try it out to see if maybe it's just something with my setup, that would be great :smile: ##### Steps To Reproduce: ``` yaml - name: create redis container docker: name: redis-test image: ""redis:3.0.3"" pull: always state: reloaded ``` ##### Expected Results: When a container already exists and it has all the same settings except the dynamically assigned name is different, nothing should happen: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* ok: [x.x.x.x] ``` ##### Actual Results: It will create a new, separate container every time: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Debug output (v1.9.1):** ``` yaml changed: [54.209.183.233] => { ""ansible_facts"": { ""docker_containers"": [{ ""Id"": ""ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e"", ""Warnings"": null }] }, ""changed"": true, ""containers"": [{ ""Id"": ""ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e"", ""Warnings"": null }], ""msg"": ""started 1 container, created 1 container."", ""reload_reasons"": null, <<<---- HUH? if there are no reasons to reload, then why does it happen? ""summary"": { ""created"": 1, ""killed"": 0, ""pulled"": 0, ""removed"": 0, ""restarted"": 0, ""started"": 1, ""stopped"": 0 } } ``` **Debug output (v2.0.0.2):** `""reload_reasons"": ""net (default => bridge)""` Weird... looking into it This seems to be very similar to https://github.com/ansible/ansible-modules-core/issues/3219, but it is NOT fixed by removing the relevant commit for that issue ",True,"Docker pull: always and state: reloaded redeploys named containers every time - ##### Issue Type: - Bug Report ##### Plugin Name: Docker ##### Ansible Version: Doesn't work in 1.9.1 to 2.0.0.2 ##### Ansible Configuration: ``` [defaults] host_key_checking=False display_skipped_hosts=False force_handlers = True hostfile = inventory/ec2.py retry_files_enabled = False [ssh_connection] pipelining=True ``` ##### Environment: Ubuntu 14.04 from OSX 10.10 ##### Summary: I'm not sure if it ever worked, or if I just imagined it worked, but according to the documentation: `""reloaded"" (added in Ansible 1.9) asserts that all matching containers are running and restarts any that have any images or configuration out of date.` This does not seem to be the case, as named containers that have nothing changed, will be reloaded every time. 
I'm almost positive this was properly working at some point so if someone could try it out to see if maybe it's just something with my setup, that would be great :smile: ##### Steps To Reproduce: ``` yaml - name: create redis container docker: name: redis-test image: ""redis:3.0.3"" pull: always state: reloaded ``` ##### Expected Results: When a container already exists and it has all the same settings except the dynamically assigned name is different, nothing should happen: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* ok: [x.x.x.x] ``` ##### Actual Results: It will create a new, separate container every time: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Debug output (v1.9.1):** ``` yaml changed: [54.209.183.233] => { ""ansible_facts"": { ""docker_containers"": [{ ""Id"": ""ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e"", ""Warnings"": null }] }, ""changed"": true, ""containers"": [{ ""Id"": ""ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e"", ""Warnings"": null }], ""msg"": ""started 1 container, created 1 container."", ""reload_reasons"": null, <<<---- HUH? if there are no reasons to reload, then why does it happen? ""summary"": { ""created"": 1, ""killed"": 0, ""pulled"": 0, ""removed"": 0, ""restarted"": 0, ""started"": 1, ""stopped"": 0 } } ``` **Debug output (v2.0.0.2):** `""reload_reasons"": ""net (default => bridge)""` Weird... 
looking into it This seems to be very similar to https://github.com/ansible/ansible-modules-core/issues/3219, but it is NOT fixed by removing the relevant commit for that issue ",1,docker pull always and state reloaded redeploys named containers every time issue type bug report plugin name docker ansible version doesn t work in to ansible configuration host key checking false display skipped hosts false force handlers true hostfile inventory py retry files enabled false pipelining true environment ubuntu from osx summary i m not sure if it ever worked or if i just imagined it worked but according to the documentation reloaded added in ansible asserts that all matching containers are running and restarts any that have any images or configuration out of date this does not seem to be the case as named containers that have nothing changed will be reloaded every time i m almost positive this was properly working at some point so if someone could try it out to see if maybe it s just something with my setup that would be great smile steps to reproduce yaml name create redis container docker name redis test image redis pull always state reloaded expected results when a container already exists and it has all the same settings except the dynamically assigned name is different nothing should happen first run gathering facts ok task changed second run gathering facts ok task ok actual results it will create a new separate container every time first run gathering facts ok task changed second run gathering facts ok task changed debug output yaml changed ansible facts docker containers id warnings null changed true containers id warnings null msg started container created container reload reasons null huh if there are no reasons to reload then why does it happen summary created killed pulled removed restarted started stopped debug output reload reasons net default bridge weird looking into it this seems to be very similar to but it is not fixed by removing the relevant commit for that issue ,1 757,4351957550.0,IssuesEvent,2016-08-01 03:16:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Mount task skipping,bug_report waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME mount ##### ANSIBLE VERSION 1.9.1 ##### SUMMARY I'm having an issue with a mount task to add an NFS target. I wasn't able to find any real documentation on doing this, so I'm not sure if it's supported at all. The mount command doesn't fail, or provide an error even with -vvvv so I'm not sure what it's doing or what's going wrong. I'm running Ansible 1.9.1 Here is the task: - mount: ""name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present"" I have also tried calling it as an action instead of mount, but the results were the same. 
Here is the output: TASK: [webservers | mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present] *** ESTABLISH CONNECTION FOR USER: root REMOTE_MODULE mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', ""/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730'""] ESTABLISH CONNECTION FOR USER: root REMOTE_MODULE mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', ""/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819'""] PUT /tmp/tmppujJbA TO /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount PUT /tmp/tmpWlK9Sh TO /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', ""/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/ >/dev/null 2>&1'""] EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', ""/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/ >/dev/null 2>&1'""]",True,"Mount task skipping - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME mount ##### ANSIBLE VERSION 1.9.1 ##### SUMMARY I'm having an issue with a mount task to add an NFS target. I wasn't able to find any real documentation on doing this, so I'm not sure if it's supported at all. The mount command doesn't fail, or provide an error even with -vvvv so I'm not sure what it's doing or what's going wrong. I'm running Ansible 1.9.1 Here is the task: - mount: ""name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present"" I have also tried calling it as an action instead of mount, but the results were the same. 
Here is the output: TASK: [webservers | mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present] *** ESTABLISH CONNECTION FOR USER: root REMOTE_MODULE mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', ""/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730'""] ESTABLISH CONNECTION FOR USER: root REMOTE_MODULE mount name=/var/shared_files src=':/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', ""/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819'""] PUT /tmp/tmppujJbA TO /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount PUT /tmp/tmpWlK9Sh TO /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', ""/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/ >/dev/null 2>&1'""] EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', ""/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/ >/dev/null 2>&1'""]",1,mount task skipping issue type bug report component name mount ansible version summary i m having an issue with a mount task to add an nfs target i wasn t able to find any real documentation on doing this so i m not sure if it s supported at all the mount command doesn t fail or provide an error even with vvvv so i m not sure what it s doing or what s going wrong i m running ansible here is the task mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present i have also tried calling it as an action instead of mount but the results were the same here is the output task establish connection for user root remote module mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present checkmode true exec 
establish connection for user root remote module mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present checkmode true exec put tmp tmppujjba to root ansible tmp ansible tmp mount put tmp to root ansible tmp ansible tmp mount exec exec ,1 1700,6574386170.0,IssuesEvent,2017-09-11 12:42:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,rds param group module to enable logging ,affects_2.3 bug_report waiting_on_maintainer,"There is no option to enable logging by default ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME rds_param_group_module ##### SUMMARY There is no option to change logging behaviour on RDSs ",True,"rds param group module to enable logging - There is no option to enable logging by default ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME rds_param_group_module ##### SUMMARY There is no option to change logging behaviour on RDSs ",1,rds param group module to enable logging there is no option to enable logging by default issue type bug report feature idea documentation report component name rds param group module summary there is no option to change logging behaviour on rdss ,1 1669,6574071177.0,IssuesEvent,2017-09-11 11:21:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,EC2 instance_initiated_shutdown_behavior default should be more intelligent,affects_2.2 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME EC2 ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When creating an instance-store backed instance, the default of ""stop"" on instance_initiated_shutdown_behavior is not a valid default. Explicitly setting to ""terminate"" resolves the issue. ##### STEPS TO REPRODUCE Running a playbook with the following task ``` - name: create web servers instance 1 ec2: image: ""{{ image }}"" instance_type: ""{{ instance_type }}"" keypair: ""{{ keypair }}"" instance_tags: Name: ""{{ role }}-v2-01"" service: ""tn"" region: ""{{ region }}"" zone: ""{{ region }}a"" group: ""{{ aws_security_group }}"" wait: true monitoring: no exact_count: 1 count_tag: Name: ""{{ role }}-v2-01"" register: ec2_info ``` ##### EXPECTED RESULTS Instance to be created ##### ACTUAL RESULTS Playbook bombs out ``` fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 1, ""count_tag"": ""{'Name': 'talis-com-server-v2-01'}"", ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": 1, ""group"": [ ""staging"", ""talis.com-auto-v2"" ], ""group_id"": null, ""id"": null, ""image"": ""ami-bd36b0ce"", ""instance_ids"": null, ""instance_initiated_shutdown_behavior"": null, ""instance_profile_name"": null, ""instance_tags"": { ""Name"": ""talis-com-server-v2-01"", ""service"": ""tn"" }, ""instance_type"": ""m1.small"", ""kernel"": null, ""key_name"": ""keypair"", ""keypair"": ""keypair"", ""monitoring"": false, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": null, ""user_data"": null, ""validate_certs"": true, ""volumes"": null, ""vpc_subnet_id"": null, ""wait"": true, ""wait_timeout"": ""300"", ""zone"": ""eu-west-1a"" }, ""module_name"": ""ec2"" }, ""msg"": ""Instance creation failed => InvalidParameterCombination: The attribute instanceInitiatedShutdownBehavior can only be used for EBS-backed images."" } ``` ",True,"EC2 instance_initiated_shutdown_behavior default should be more intelligent - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME EC2 ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When creating an instance-store backed instance, the default of ""stop"" on instance_initiated_shutdown_behavior is not a valid default. Explicitly setting to ""terminate"" resolves the issue. ##### STEPS TO REPRODUCE Running a playbook with the following task ``` - name: create web servers instance 1 ec2: image: ""{{ image }}"" instance_type: ""{{ instance_type }}"" keypair: ""{{ keypair }}"" instance_tags: Name: ""{{ role }}-v2-01"" service: ""tn"" region: ""{{ region }}"" zone: ""{{ region }}a"" group: ""{{ aws_security_group }}"" wait: true monitoring: no exact_count: 1 count_tag: Name: ""{{ role }}-v2-01"" register: ec2_info ``` ##### EXPECTED RESULTS Instance to be created ##### ACTUAL RESULTS Playbook bombs out ``` fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 1, ""count_tag"": ""{'Name': 'talis-com-server-v2-01'}"", ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": 1, ""group"": [ ""staging"", ""talis.com-auto-v2"" ], ""group_id"": null, ""id"": null, ""image"": ""ami-bd36b0ce"", ""instance_ids"": null, ""instance_initiated_shutdown_behavior"": null, ""instance_profile_name"": null, ""instance_tags"": { ""Name"": ""talis-com-server-v2-01"", ""service"": ""tn"" }, ""instance_type"": ""m1.small"", ""kernel"": null, ""key_name"": ""keypair"", ""keypair"": ""keypair"", ""monitoring"": false, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": null, ""user_data"": null, ""validate_certs"": true, ""volumes"": null, ""vpc_subnet_id"": null, ""wait"": true, ""wait_timeout"": ""300"", ""zone"": ""eu-west-1a"" }, ""module_name"": ""ec2"" }, ""msg"": ""Instance creation failed => InvalidParameterCombination: The attribute instanceInitiatedShutdownBehavior can only be used for EBS-backed images."" } ``` ",1, instance initiated shutdown behavior default should be more intelligent issue type bug report component name ansible version ansible config file etc ansible ansible cfg configured module search path os environment n a summary when creating an instance store backed instance the default of stop on instance initiated shutdown behavior is not a valid default explicitly setting to terminate resolves the issue steps to reproduce running a playbook with the following task name create web servers instance image image instance type instance type keypair keypair instance tags name role service tn region region zone region a group aws security group wait true monitoring no exact count count tag name role register info expected results instance to be created actual results playbook bombs out fatal failed changed false failed true invocation module args assign public ip false aws access key null aws secret key null count count tag name talis com server ebs optimized false url null exact count group staging talis com auto group id null id null image ami instance ids null instance initiated shutdown behavior null instance profile name null instance tags name talis com server service tn instance type small kernel null key name keypair keypair keypair monitoring false network interfaces null placement group null private ip null profile null ramdisk null region eu west security token null source dest check true spot launch group null spot price null spot type one time spot wait timeout state present tenancy default termination protection null user data null validate certs true volumes null vpc subnet id null wait true wait timeout zone eu west module name msg instance creation failed invalidparametercombination the attribute instanceinitiatedshutdownbehavior can only be used for ebs backed images ,1 1414,6155312387.0,IssuesEvent,2017-06-28 14:32:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_module volumes fails,affects_1.9 bug_report cloud docker waiting_on_maintainer,"#### Issue Type: Bug Report #### 
Component Name _docker module #### Ansible Version: 1.9.2 ### Ansible Configuration ``` [defaults] inventory = ./hosts private_key_file = /Users/vagrant/.vagrant.d/insecure_private_key remote_user = vagrant host_key_checking = False ask_sudo_pass: yes ``` #### Docker.py version 1.2.3 #### Environment Running from: OS X 10.9.5 Managing: Ubuntu 15.04 #### Summary: volumes in docker_module fails to bind to the correct path on host machine #### Steps to reproduce This playbook fails ``` - name: Create Jenkins data directory on host file: ""path=/home/{{ansible_ssh_user}}/.jenkins state=directory mode=0777"" - name: Jenkins data directory docker: name: jenkins_data image: busybox state: present volumes: ""/home/{{ansible_ssh_user}}/.jenkins:/var/jenkins_home"" sudo: yes - name: Start Jenkins container docker: name: jenkins_master image: mediaplayout/jenkins volumes_from: jenkins_data pull: always state: started ports: - 8080:8080 - 50000:50000 sudo: yes ``` whilst this playbook correctly attachs the volume at ~/.jenkins: ``` - name: Create Jenkins data directory on host file: ""path=/home/{{ansible_ssh_user}}/.jenkins state=directory mode=0777"" - name: Jenkins data directory command: ""docker create -v {{ansible_env.HOME}}/.jenkins:/var/jenkins_home --name jenkins_data busybox"" sudo: yes - name: Start Jenkins container docker: name: jenkins_master image: mediaplayout/jenkins volumes_from: jenkins_data pull: always state: started ports: - 8080:8080 - 50000:50000 sudo: yes ``` #### Expected Results ``` ls /home/vagrant/.jenkins ``` should show the data volume of the Jenkins instance running within container ",True,"docker_module volumes fails - #### Issue Type: Bug Report #### Component Name _docker module #### Ansible Version: 1.9.2 ### Ansible Configuration ``` [defaults] inventory = ./hosts private_key_file = /Users/vagrant/.vagrant.d/insecure_private_key remote_user = vagrant host_key_checking = False ask_sudo_pass: yes ``` #### Docker.py version 1.2.3 #### Environment Running from: OS X 10.9.5 Managing: Ubuntu 15.04 #### Summary: volumes in docker_module fails to bind to the correct path on host machine #### Steps to reproduce This playbook fails ``` - name: Create Jenkins data directory on host file: ""path=/home/{{ansible_ssh_user}}/.jenkins state=directory mode=0777"" - name: Jenkins data directory docker: name: jenkins_data image: busybox state: present volumes: ""/home/{{ansible_ssh_user}}/.jenkins:/var/jenkins_home"" sudo: yes - name: Start Jenkins container docker: name: jenkins_master image: mediaplayout/jenkins volumes_from: jenkins_data pull: always state: started ports: - 8080:8080 - 50000:50000 sudo: yes ``` whilst this playbook correctly attachs the volume at ~/.jenkins: ``` - name: Create Jenkins data directory on host file: ""path=/home/{{ansible_ssh_user}}/.jenkins state=directory mode=0777"" - name: Jenkins data directory command: ""docker create -v {{ansible_env.HOME}}/.jenkins:/var/jenkins_home --name jenkins_data busybox"" sudo: yes - name: Start Jenkins container docker: name: jenkins_master image: mediaplayout/jenkins volumes_from: jenkins_data pull: always state: started ports: - 8080:8080 - 50000:50000 sudo: yes ``` #### Expected Results ``` ls /home/vagrant/.jenkins ``` should show the data volume of the Jenkins instance running within container ",1,docker module volumes fails issue type bug report component name docker module ansible version ansible configuration inventory hosts private key file users vagrant vagrant d insecure private key remote user vagrant host 
key checking false ask sudo pass yes docker py version environment running from os x managing ubuntu summary volumes in docker module fails to bind to the correct path on host machine steps to reproduce this playbook fails name create jenkins data directory on host file path home ansible ssh user jenkins state directory mode name jenkins data directory docker name jenkins data image busybox state present volumes home ansible ssh user jenkins var jenkins home sudo yes name start jenkins container docker name jenkins master image mediaplayout jenkins volumes from jenkins data pull always state started ports sudo yes whilst this playbook correctly attachs the volume at jenkins name create jenkins data directory on host file path home ansible ssh user jenkins state directory mode name jenkins data directory command docker create v ansible env home jenkins var jenkins home name jenkins data busybox sudo yes name start jenkins container docker name jenkins master image mediaplayout jenkins volumes from jenkins data pull always state started ports sudo yes expected results ls home vagrant jenkins should show the data volume of the jenkins instance running within container ,1 1766,6575023003.0,IssuesEvent,2017-09-11 14:48:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,'azure_rm_subnet' with 'state: absent' fails when subnet was already not existing,affects_2.1 azure bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_subnet ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/techraf/devops/infra-azure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY `azure_rm_subnet` requires the parameter `virtual_network_name` to be provided, otherwise: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""missing required arguments: virtual_network_name""} ``` **But** running the module with `state: absent` fails when the virtual network specified in `virtual_network_name` is non-existent. ##### STEPS TO REPRODUCE Delete the virtual_network then run the `azure_rm_subnet` task with `state: absent`. ``` - name: Ensure subnet does not exist azure_rm_subnet: resource_group: Testing name: subnet001 state: absent virtual_network_name: testvn001 ``` ##### EXPECTED RESULTS ``` ok: [localhost] ``` ##### ACTUAL RESULTS ``` TASK [Ensure subnet does not exist] ******************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error deleting subnet subnet001 - The Resource 'Microsoft.Network/virtualNetworks/testvn001' under resource group 'Testing' was not found.""} ``` ",True,"'azure_rm_subnet' with 'state: absent' fails when subnet was already not existing - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_subnet ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/techraf/devops/infra-azure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY `azure_rm_subnet` requires the parameter `virtual_network_name` to be provided, otherwise: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""missing required arguments: virtual_network_name""} ``` **But** running the module with `state: absent` fails when the virtual network specified in `virtual_network_name` is non-existent. 
##### STEPS TO REPRODUCE Delete the virtual_network then run the `azure_rm_subnet` task with `state: absent`. ``` - name: Ensure subnet does not exist azure_rm_subnet: resource_group: Testing name: subnet001 state: absent virtual_network_name: testvn001 ``` ##### EXPECTED RESULTS ``` ok: [localhost] ``` ##### ACTUAL RESULTS ``` TASK [Ensure subnet does not exist] ******************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error deleting subnet subnet001 - The Resource 'Microsoft.Network/virtualNetworks/testvn001' under resource group 'Testing' was not found.""} ``` ",1, azure rm subnet with state absent fails when subnet was already not existing issue type bug report component name azure rm subnet ansible version ansible config file home techraf devops infra azure ansible cfg configured module search path default w o overrides configuration os environment debian jessie summary azure rm subnet requires the parameter virtual network name to be provided otherwise fatal failed changed false failed true msg missing required arguments virtual network name but running the module with state absent fails when the virtual network specified in virtual network name is non existent steps to reproduce delete the virtual network then run the azure rm subnet task with state absent name ensure subnet does not exist azure rm subnet resource group testing name state absent virtual network name expected results ok actual results task fatal failed changed false failed true msg error deleting subnet the resource microsoft network virtualnetworks under resource group testing was not found ,1 1137,4998875105.0,IssuesEvent,2016-12-09 21:19:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2_group requires ""description"" to delete SG",affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY `ec2_group` requires `description` parameter to be specified to delete a security group. This doesn't make sense, IMHO, and it seems to contradict to [what Boto requires](http://boto.cloudhackers.com/en/latest/ref/ec2.html#boto.ec2.connection.EC2Connection.delete_security_group). ##### STEPS TO REPRODUCE Tried to execute the following task: ``` - name: ""Destroy Security Group for FE trusted ELB"" ec2_group: state: absent region: ""{{ region }}"" name: ""{{ owner }}_sg_{{ env }}_fe_elb_trst"" ``` ##### EXPECTED RESULTS Expected that the above task would execute without error. ##### ACTUAL RESULTS The above task raised an error: ``` missing required arguments: description ``` ",True,"ec2_group requires ""description"" to delete SG - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY `ec2_group` requires `description` parameter to be specified to delete a security group. This doesn't make sense, IMHO, and it seems to contradict to [what Boto requires](http://boto.cloudhackers.com/en/latest/ref/ec2.html#boto.ec2.connection.EC2Connection.delete_security_group). 
##### STEPS TO REPRODUCE Tried to execute the following task: ``` - name: ""Destroy Security Group for FE trusted ELB"" ec2_group: state: absent region: ""{{ region }}"" name: ""{{ owner }}_sg_{{ env }}_fe_elb_trst"" ``` ##### EXPECTED RESULTS Expected that the above task would execute without error. ##### ACTUAL RESULTS The above task raised an error: ``` missing required arguments: description ``` ",1, group requires description to delete sg issue type bug report component name group ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary group requires description parameter to be specified to delete a security group this doesn t make sense imho and it seems to contradict to steps to reproduce tried to execute the following task name destroy security group for fe trusted elb group state absent region region name owner sg env fe elb trst expected results expected that the above task would execute without error actual results the above task raised an error missing required arguments description ,1 1631,6572657006.0,IssuesEvent,2017-09-11 04:08:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,S3 Module 'Failed to connect to S3: Region does not seem to be available for aws module boto.s3.',affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - s3 ##### ANSIBLE VERSION ``` 2.1.0 - devel ``` ##### OS / ENVIRONMENT - CentOS 7.2 / MacOS X 10.11.4 - Boto 2.39.0 ##### SUMMARY Since commit `0dd58e932680af5d3544a045c5ea0bd0c9eadeb0` (Use connect_to_aws where possible) this error started to happen. Testing with one commit before: `344cf5fc0e2c8637fe9513206b2c843ca60264cf` it is working fine. ##### STEPS TO REPRODUCE ``` ansible -i localhost, -c local -m s3 -a 'bucket= object= dest=/tmp/file mode=get' localhost localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Failed to connect to S3: Region does not seem to be available for aws module boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"" } ``` ##### EXPECTED RESULTS ``` localhost | SUCCESS => { ""changed"": true, ""msg"": ""GET operation complete"" } ``` ",True,"S3 Module 'Failed to connect to S3: Region does not seem to be available for aws module boto.s3.' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - s3 ##### ANSIBLE VERSION ``` 2.1.0 - devel ``` ##### OS / ENVIRONMENT - CentOS 7.2 / MacOS X 10.11.4 - Boto 2.39.0 ##### SUMMARY Since commit `0dd58e932680af5d3544a045c5ea0bd0c9eadeb0` (Use connect_to_aws where possible) this error started to happen. Testing with one commit before: `344cf5fc0e2c8637fe9513206b2c843ca60264cf` it is working fine. ##### STEPS TO REPRODUCE ``` ansible -i localhost, -c local -m s3 -a 'bucket= object= dest=/tmp/file mode=get' localhost localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Failed to connect to S3: Region does not seem to be available for aws module boto.s3. 
If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"" } ``` ##### EXPECTED RESULTS ``` localhost | SUCCESS => { ""changed"": true, ""msg"": ""GET operation complete"" } ``` ",1, module failed to connect to region does not seem to be available for aws module boto issue type bug report component name ansible version devel os environment centos macos x boto summary since commit use connect to aws where possible this error started to happen testing with one commit before it is working fine steps to reproduce ansible i localhost c local m a bucket object dest tmp file mode get localhost localhost failed changed false failed true msg failed to connect to region does not seem to be available for aws module boto if the region definitely exists you may need to upgrade boto or extend with endpoints path expected results localhost success changed true msg get operation complete ,1 1787,6575880306.0,IssuesEvent,2017-09-11 17:41:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,group_by doesn't add hosts on second run (when group exists),affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION HEAD ##### OS / ENVIRONMENT N/A ##### SUMMARY When group_by is used twice, any new hosts are not added the second time. The host will have the group in group_names, but will not appear in groups.groupname. This seems to be a problem with the `self._inventory.clear_group_dict_cache()` call being misplaced in the new group creation block, not the host add block. Pull request submitted: https://github.com/ansible/ansible/pull/17766 ##### STEPS TO REPRODUCE ``` - name: Test grouping hosts: 192.168.2.245 192.168.2.246 tasks: - group_by: key: test when: inventory_hostname == '192.168.2.245' - group_by: key: test when: inventory_hostname == '192.168.2.246' - debug: var: groups.test - debug: var: group_names ``` ##### EXPECTED RESULTS ``` PLAY [Test grouping] *********************************************************** TASK [setup] ******************************************************************* ok: [192.168.2.245] ok: [192.168.2.246] TASK [group_by] **************************************************************** ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.246] TASK [debug] ******************************************************************* ok: [192.168.2.246] => { ""groups.test"": [ ""192.168.2.245"", ""192.168.2.246"" ] } ok: [192.168.2.245] => { ""groups.test"": [ ""192.168.2.245"", ""192.168.2.246"" ] } TASK [debug] ******************************************************************* ok: [192.168.2.245] => { ""group_names"": [ ""test"" ] } ok: [192.168.2.246] => { ""group_names"": [ ""test"" ] } ``` ##### ACTUAL RESULTS ``` PLAY [Test] ******************************************************************** TASK [setup] ******************************************************************* ok: [192.168.2.246] ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.246] TASK [debug] ******************************************************************* ok: [192.168.2.246] => { ""groups.test"": [ ""192.168.2.245"" ] } ok: [192.168.2.245] => { ""groups.test"": [ ""192.168.2.245"" ] } TASK [debug] 
******************************************************************* ok: [192.168.2.246] => { ""group_names"": [ ""test"" ] } ok: [192.168.2.245] => { ""group_names"": ""test"" ] } ``` ",True,"group_by doesn't add hosts on second run (when group exists) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION HEAD ##### OS / ENVIRONMENT N/A ##### SUMMARY When group_by is used twice, any new hosts are not added the second time. The host will have the group in group_names, but will not appear in groups.groupname. This seems to be a problem with the `self._inventory.clear_group_dict_cache()` call being misplaced in the new group creation block, not the host add block. Pull request submitted: https://github.com/ansible/ansible/pull/17766 ##### STEPS TO REPRODUCE ``` - name: Test grouping hosts: 192.168.2.245 192.168.2.246 tasks: - group_by: key: test when: inventory_hostname == '192.168.2.245' - group_by: key: test when: inventory_hostname == '192.168.2.246' - debug: var: groups.test - debug: var: group_names ``` ##### EXPECTED RESULTS ``` PLAY [Test grouping] *********************************************************** TASK [setup] ******************************************************************* ok: [192.168.2.245] ok: [192.168.2.246] TASK [group_by] **************************************************************** ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.246] TASK [debug] ******************************************************************* ok: [192.168.2.246] => { ""groups.test"": [ ""192.168.2.245"", ""192.168.2.246"" ] } ok: [192.168.2.245] => { ""groups.test"": [ ""192.168.2.245"", ""192.168.2.246"" ] } TASK [debug] ******************************************************************* ok: [192.168.2.245] => { ""group_names"": [ ""test"" ] } ok: [192.168.2.246] => { ""group_names"": [ ""test"" ] } ``` ##### ACTUAL RESULTS ``` PLAY [Test] ******************************************************************** TASK [setup] ******************************************************************* ok: [192.168.2.246] ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.245] TASK [group_by] **************************************************************** ok: [192.168.2.246] TASK [debug] ******************************************************************* ok: [192.168.2.246] => { ""groups.test"": [ ""192.168.2.245"" ] } ok: [192.168.2.245] => { ""groups.test"": [ ""192.168.2.245"" ] } TASK [debug] ******************************************************************* ok: [192.168.2.246] => { ""group_names"": [ ""test"" ] } ok: [192.168.2.245] => { ""group_names"": ""test"" ] } ``` ",1,group by doesn t add hosts on second run when group exists issue type bug report component name group by ansible version head os environment n a summary when group by is used twice any new hosts are not added the second time the host will have the group in group names but will not appear in groups groupname this seems to be a problem with the self inventory clear group dict cache call being misplaced in the new group creation block not the host add block pull request submitted steps to reproduce name test grouping hosts tasks group by key test when inventory hostname group by key test when inventory hostname debug var groups test debug var group names expected results play task ok ok task ok task ok task ok groups test ok groups test task ok group names test ok group 
names test actual results play task ok ok task ok task ok task ok groups test ok groups test task ok group names test ok group names test ,1 1695,6574217152.0,IssuesEvent,2017-09-11 12:00:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ability to stop or start EC2 instances without changing any other settings,affects_2.0 aws cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2 ##### ANSIBLE VERSION ``` ansible 2.0.1.0 (detached HEAD bb6cadefa2) last updated 2016/11/11 12:17:00 (GMT +000) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ability to stop or start EC2 instances without changing any other settings. Currently some settings such as ""termination_protection"" have a default value which will be applied to the ec2 instance. This causes issues when you have a generic playbook to stop/start EC2 instances and some instances will have termination protection on and some will have termination protection off. ##### STEPS TO REPRODUCE ``` # stop.yml tasks - name: gather ec2 facts action: ec2_facts - name: stop ec2 instance become: False local_action: module: 'ec2' instance_ids: '{{ ansible_ec2_instance_id }}' region: 'eu-west-1' state: stopped wait: True # start.yml tasks - name: gather ec2 facts action: ec2_facts - name: start ec2 instance become: False local_action: module: 'ec2' instance_ids: '{{ ansible_ec2_instance_id }}' region: 'eu-west-1' state: running wait: True ``` ##### EXPECTED RESULTS I would expect the ability to start or stop EC2 instances without the module attempting to change other settings. I guess this edge case exists on multiple cloud modules. My expectations: - parameter not defined by me and the relevant setting is not configured yet (e.g. new EC2 instance): use the default - parameter not defined by me and the relevant setting is already configured (e.g. existing EC2 instance): skip this parameter - parameter defined by me: configures the relevant setting ##### ACTUAL RESULTS The module attempted to turn termination protection off (the IAM credentials disallowed this action) ",True,"Ability to stop or start EC2 instances without changing any other settings - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2 ##### ANSIBLE VERSION ``` ansible 2.0.1.0 (detached HEAD bb6cadefa2) last updated 2016/11/11 12:17:00 (GMT +000) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ability to stop or start EC2 instances without changing any other settings. Currently some settings such as ""termination_protection"" have a default value which will be applied to the ec2 instance. This causes issues when you have a generic playbook to stop/start EC2 instances and some instances will have termination protection on and some will have termination protection off. ##### STEPS TO REPRODUCE ``` # stop.yml tasks - name: gather ec2 facts action: ec2_facts - name: stop ec2 instance become: False local_action: module: 'ec2' instance_ids: '{{ ansible_ec2_instance_id }}' region: 'eu-west-1' state: stopped wait: True # start.yml tasks - name: gather ec2 facts action: ec2_facts - name: start ec2 instance become: False local_action: module: 'ec2' instance_ids: '{{ ansible_ec2_instance_id }}' region: 'eu-west-1' state: running wait: True ``` ##### EXPECTED RESULTS I would expect the ability to start or stop EC2 instances without the module attempting to change other settings. I guess this edge case exists on multiple cloud modules. 
My expectations: - parameter not defined by me and the relevant setting is not configured yet (e.g. new EC2 instance): use the default - parameter not defined by me and the relevant setting is already configured (e.g. existing EC2 instance): skip this parameter - parameter defined by me: configures the relevant setting ##### ACTUAL RESULTS The module attempted to turn termination protection off (the IAM credentials disallowed this action) ",1,ability to stop or start instances without changing any other settings issue type feature idea component name ansible version ansible detached head last updated gmt configuration n a os environment n a summary ability to stop or start instances without changing any other settings currently some settings such as termination protection have a default value which will be applied to the instance this causes issues when you have a generic playbook to stop start instances and some instances will have termination protection on and some will have termination protection off steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used stop yml tasks name gather facts action facts name stop instance become false local action module instance ids ansible instance id region eu west state stopped wait true start yml tasks name gather facts action facts name start instance become false local action module instance ids ansible instance id region eu west state running wait true expected results i would expect the ability to start or stop instances without the module attempting to change other settings i guess this edge case exists on multiple cloud modules my expectations parameter not defined by me and the relevant setting is not configured yet e g new instance use the default parameter not defined by me and the relevant setting is already configured e g existing instance skip this parameter parameter defined by me configures the relevant setting actual results the module attempted to turn termination protection off the iam credentials disallowed this action ,1 1033,4827588515.0,IssuesEvent,2016-11-07 14:05:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloudformation module throws error when stack is gone and state=absent,affects_2.3 aws bug_report cloud waiting_on_maintainer,"cloudformation module should return 'ok' if state= absent and stack does not exist (=absent). Instead, the module throws an error: failed: [ensure stack is gone] => {""failed"": true} msg: Stack with id does not exist Have not yet verified this, but looks like exception is being thrown in cfn.describe_stacks: if state == 'absent': try: invoke_with_throttling_retries(cfn.describe_stacks,stack_name) << (line 340) operation = 'DELETE' except Exception, err: error_msg = boto_exception(err) in ansible-modules/core/cloud/amazon/cloudformation.py ",True,"cloudformation module throws error when stack is gone and state=absent - cloudformation module should return 'ok' if state= absent and stack does not exist (=absent). 
Instead, the module throws an error: failed: [ensure stack is gone] => {""failed"": true} msg: Stack with id does not exist Have not yet verified this, but looks like exception is being thrown in cfn.describe_stacks: if state == 'absent': try: invoke_with_throttling_retries(cfn.describe_stacks,stack_name) << (line 340) operation = 'DELETE' except Exception, err: error_msg = boto_exception(err) in ansible-modules/core/cloud/amazon/cloudformation.py ",1,cloudformation module throws error when stack is gone and state absent cloudformation module should return ok if state absent and stack does not exist absent instead the module throws an error failed failed true msg stack with id does not exist have not yet verified this but looks like exception is being thrown in cfn describe stacks if state absent try invoke with throttling retries cfn describe stacks stack name line operation delete except exception err error msg boto exception err in ansible modules core cloud amazon cloudformation py ,1 1687,6574166501.0,IssuesEvent,2017-09-11 11:47:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Synchronize module ignores remote_user parameter,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module `synchronize` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT macOS Sierra ##### SUMMARY Synchronize module doesn't use user specified in `remote_user` to connect to another system. Works correctly in `2.0.2.0`. ##### STEPS TO REPRODUCE You need a machine that runs Ansible. A host, let's call it `staging_machine`, where the `ansible_user` is `staging_user`. Another host called `my_server`, where the `ansible_user` is `deploy_user`. Both the ansible machine and `staging_machine` have keys on `my_server` for `deploy_user`. Then run ``` - name: Deploy java versions and scripts hosts: my_server any_errors_fatal: true tasks: - name: ""Install JDK"" synchronize: src=""jdk_folder"" dest=""deploy_area/jdkfolder/"" delegate_to: staging_machine remote_user: ""deploy_user"" ``` ##### EXPECTED RESULTS Files being copied. ##### ACTUAL RESULTS Server `staging_machine` tries to connect with it's own ansible_user `staging_user` instead of with `deploy_user`. ``` TASK [Install JDK] ************************************************************* staging_user@my_server's password: ``` This works in `2.0.2.0`.",True,"Synchronize module ignores remote_user parameter - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module `synchronize` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT macOS Sierra ##### SUMMARY Synchronize module doesn't use user specified in `remote_user` to connect to another system. Works correctly in `2.0.2.0`. ##### STEPS TO REPRODUCE You need a machine that runs Ansible. A host, let's call it `staging_machine`, where the `ansible_user` is `staging_user`. Another host called `my_server`, where the `ansible_user` is `deploy_user`. Both the ansible machine and `staging_machine` have keys on `my_server` for `deploy_user`. 
Then run ``` - name: Deploy java versions and scripts hosts: my_server any_errors_fatal: true tasks: - name: ""Install JDK"" synchronize: src=""jdk_folder"" dest=""deploy_area/jdkfolder/"" delegate_to: staging_machine remote_user: ""deploy_user"" ``` ##### EXPECTED RESULTS Files being copied. ##### ACTUAL RESULTS Server `staging_machine` tries to connect with it's own ansible_user `staging_user` instead of with `deploy_user`. ``` TASK [Install JDK] ************************************************************* staging_user@my_server's password: ``` This works in `2.0.2.0`.",1,synchronize module ignores remote user parameter issue type bug report component name module synchronize ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment macos sierra summary synchronize module doesn t use user specified in remote user to connect to another system works correctly in steps to reproduce you need a machine that runs ansible a host let s call it staging machine where the ansible user is staging user another host called my server where the ansible user is deploy user both the ansible machine and staging machine have keys on my server for deploy user then run name deploy java versions and scripts hosts my server any errors fatal true tasks name install jdk synchronize src jdk folder dest deploy area jdkfolder delegate to staging machine remote user deploy user expected results files being copied actual results server staging machine tries to connect with it s own ansible user staging user instead of with deploy user task staging user my server s password this works in ,1 1675,6574105303.0,IssuesEvent,2017-09-11 11:30:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_command: Weird stdout & missing results[].cli_command field when a pipe is used,affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command but the issue might be caused by another module such as ""include_role"" for instance. ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit 20161123.089ffae) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.10 4.8 - **Target nodes**: IOSv_L2 15.2(4.0.55)E ##### SUMMARY This issue happens only when using a '|' pipe within a show command ('which is legit on Cisco CLI). No such weird stdout happens when the same command is used directly on the device CLI. Also, the results[].cli_command field is always present when the show command does not include a pipe. We can verify with results[].invocation.module_args.commands[0 that the right show command is sent to the device. ##### STEPS TO REPRODUCE roles/ios_pull_tables/tasks/main.yml ``` ... 
- name: Including the right module include: ""PACL_Table.yml"" ``` roles/ios_pull_tables/tasks/PACL_Table.yml ``` - name: Fetching interfaces facts from the remote node ios_facts: gather_subset: interfaces provider: ""{{ connections.ssh }}"" register: facts - name: Fetching PACL_Table on all L2 interfaces from the remote node ios_command: provider: ""{{ connections.ssh }}"" commands: - ""show run interface {{ net_item.key }} | include ^interface|access-group"" with_dict: ""{{ ansible_net_interfaces }}"" when: net_item.value.ipv4.address is not defined loop_control: loop_var: net_item register: table - name: Saving the fetched table locally include_role: name: save_table ``` roles/save_table/tasks/main.yml (with item=PACL_Table) ``` - name: Printing the returned table(s) debug: var=table.results ... - name: Saving ""{{ item }}"" into local file blockinfile: dest: ""{{ dest_file }}"" create: yes block: '{{ stdout_item.stdout[0] }}' marker: ""<--- {mark} {{item}} fetched with {{ stdout_item.cli_command }} --->"" insertafter: EOF with_items: ""{{ table.results }}"" loop_control: loop_var: stdout_item ignore_errors: yes ``` ##### EXPECTED RESULTS The results[].stdout[0] should contain the result of the previous show command. The results[].cli_command field should be accessed without issue allowing ""PACL_Table"" to be saved without error in this example ##### ACTUAL RESULTS ``` ... TASK [save_table : Printing the returned table(s)] ************************************************************************************************************************** ok: [IOSv_L2_10] => { ""table.results"": [ { ""_ansible_item_label"": { ""key"": ""GigabitEthernet1/2"", ""value"": { ""bandwidth"": 1000000, ""description"": ""Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']"", ""duplex"": ""Full"", ""ipv4"": null, ""lineprotocol"": ""up (connected) "", ""macaddress"": ""0036.2586.7e06"", ""mediatype"": ""unknown media type"", ""mtu"": 1500, ""operstatus"": ""up"", ""type"": ""iGbE"" } }, ""_ansible_item_result"": true, ""_ansible_no_log"": false, ""_ansible_parsed"": true, ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""commands"": [ ""show run interface GigabitEthernet1/2 | include ^interface|access-group"" ], ""host"": ""172.21.100.210"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""host"": ""172.21.100.210"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""ssh_keyfile"": ""~/.ssh/id_rsa"", ""timeout"": 10, ""transport"": ""cli"", ""username"": ""admin"", ""version"": 2 }, ""retries"": 10, ""ssh_keyfile"": ""/root/.ssh/id_rsa"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""ios_command"" }, ""net_item"": { ""key"": ""GigabitEthernet1/2"", ""value"": { ""bandwidth"": 1000000, ""description"": ""Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']"", ""duplex"": ""Full"", ""ipv4"": null, ""lineprotocol"": ""up (connected) "", ""macaddress"": ""0036.2586.7e06"", ""mediatype"": ""unknown media type"", ""mtu"": 1500, ""operstatus"": ""up"", ""type"": ""iGbE"" } }, ""stdout"": [ ""show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g 
roup\ninterface GigabitEthernet1/2"" ], ""stdout_lines"": [ [ ""show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup"", ""interface GigabitEthernet1/2"" ] ], ""warnings"": [] }, ... TASK [save_table : Saving ""PACL_Table"" into local file] ********************************************************************************************************************* fatal: [IOSv_L2_10]: FAILED! => {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'cli_command'\n\nThe error appears to have been in '/home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/save_table/tasks/main.yml': line 80, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Saving \""{{ item }}\"" into local file\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \""{{ foo }}\""\n""} ``` We can see that: - **results[].stdout[0] is very strange**: it contains the command which is repeated & twisted several times, without any result. - **'dict object' has no attribute 'cli_command'** Despite a correct stdout on the device CLI: ``` IOSv_L2_10#show run interface G1/2 | include ^interface|access-group interface GigabitEthernet1/2 ip access-group ip_acl in mac access-group mac_acl in ```",True,"ios_command: Weird stdout & missing results[].cli_command field when a pipe is used - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command but the issue might be caused by another module such as ""include_role"" for instance. ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit 20161123.089ffae) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.10 4.8 - **Target nodes**: IOSv_L2 15.2(4.0.55)E ##### SUMMARY This issue happens only when using a '|' pipe within a show command ('which is legit on Cisco CLI). No such weird stdout happens when the same command is used directly on the device CLI. Also, the results[].cli_command field is always present when the show command does not include a pipe. We can verify with results[].invocation.module_args.commands[0 that the right show command is sent to the device. ##### STEPS TO REPRODUCE roles/ios_pull_tables/tasks/main.yml ``` ... 
- name: Including the right module include: ""PACL_Table.yml"" ``` roles/ios_pull_tables/tasks/PACL_Table.yml ``` - name: Fetching interfaces facts from the remote node ios_facts: gather_subset: interfaces provider: ""{{ connections.ssh }}"" register: facts - name: Fetching PACL_Table on all L2 interfaces from the remote node ios_command: provider: ""{{ connections.ssh }}"" commands: - ""show run interface {{ net_item.key }} | include ^interface|access-group"" with_dict: ""{{ ansible_net_interfaces }}"" when: net_item.value.ipv4.address is not defined loop_control: loop_var: net_item register: table - name: Saving the fetched table locally include_role: name: save_table ``` roles/save_table/tasks/main.yml (with item=PACL_Table) ``` - name: Printing the returned table(s) debug: var=table.results ... - name: Saving ""{{ item }}"" into local file blockinfile: dest: ""{{ dest_file }}"" create: yes block: '{{ stdout_item.stdout[0] }}' marker: ""<--- {mark} {{item}} fetched with {{ stdout_item.cli_command }} --->"" insertafter: EOF with_items: ""{{ table.results }}"" loop_control: loop_var: stdout_item ignore_errors: yes ``` ##### EXPECTED RESULTS The results[].stdout[0] should contain the result of the previous show command. The results[].cli_command field should be accessed without issue allowing ""PACL_Table"" to be saved without error in this example ##### ACTUAL RESULTS ``` ... TASK [save_table : Printing the returned table(s)] ************************************************************************************************************************** ok: [IOSv_L2_10] => { ""table.results"": [ { ""_ansible_item_label"": { ""key"": ""GigabitEthernet1/2"", ""value"": { ""bandwidth"": 1000000, ""description"": ""Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']"", ""duplex"": ""Full"", ""ipv4"": null, ""lineprotocol"": ""up (connected) "", ""macaddress"": ""0036.2586.7e06"", ""mediatype"": ""unknown media type"", ""mtu"": 1500, ""operstatus"": ""up"", ""type"": ""iGbE"" } }, ""_ansible_item_result"": true, ""_ansible_no_log"": false, ""_ansible_parsed"": true, ""changed"": false, ""invocation"": { ""module_args"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""commands"": [ ""show run interface GigabitEthernet1/2 | include ^interface|access-group"" ], ""host"": ""172.21.100.210"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": { ""auth_pass"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""authorize"": true, ""host"": ""172.21.100.210"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""ssh_keyfile"": ""~/.ssh/id_rsa"", ""timeout"": 10, ""transport"": ""cli"", ""username"": ""admin"", ""version"": 2 }, ""retries"": 10, ""ssh_keyfile"": ""/root/.ssh/id_rsa"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""admin"", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""ios_command"" }, ""net_item"": { ""key"": ""GigabitEthernet1/2"", ""value"": { ""bandwidth"": 1000000, ""description"": ""Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']"", ""duplex"": ""Full"", ""ipv4"": null, ""lineprotocol"": ""up (connected) "", ""macaddress"": ""0036.2586.7e06"", ""mediatype"": ""unknown media type"", ""mtu"": 1500, ""operstatus"": ""up"", ""type"": ""iGbE"" } }, ""stdout"": [ ""show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g 
roup\ninterface GigabitEthernet1/2"" ], ""stdout_lines"": [ [ ""show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup"", ""interface GigabitEthernet1/2"" ] ], ""warnings"": [] }, ... TASK [save_table : Saving ""PACL_Table"" into local file] ********************************************************************************************************************* fatal: [IOSv_L2_10]: FAILED! => {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'cli_command'\n\nThe error appears to have been in '/home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/save_table/tasks/main.yml': line 80, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Saving \""{{ item }}\"" into local file\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \""{{ foo }}\""\n""} ``` We can see that: - **results[].stdout[0] is very strange**: it contains the command which is repeated & twisted several times, without any result. - **'dict object' has no attribute 'cli_command'** Despite a correct stdout on the device CLI: ``` IOSv_L2_10#show run interface G1/2 | include ^interface|access-group interface GigabitEthernet1/2 ip access-group ip_acl in mac access-group mac_acl in ```",1,ios command weird stdout missing results cli command field when a pipe is used issue type bug report component name ios command but the issue might be caused by another module such as include role for instance ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes iosv e summary this issue happens only when using a pipe within a show command which is legit on cisco cli no such weird stdout happens when the same command is used directly on the device cli also the results cli command field is always present when the show command does not include a pipe we can verify with results invocation module args commands that the right show command is sent to the device steps to reproduce roles ios pull tables tasks main yml name including the right module include pacl table yml roles ios pull tables tasks pacl table yml name fetching interfaces facts from the remote node ios facts gather subset interfaces provider connections ssh register facts name fetching pacl table on all interfaces from the remote node ios command provider connections ssh commands show run interface net item key include interface access group with dict ansible net interfaces when net item value address is not defined loop control loop var net item register table name saving the fetched table locally include role name save table roles save table tasks main yml with item pacl table name printing the returned table s debug var table results name saving item into local file blockinfile dest dest file create yes block stdout item stdout marker insertafter eof 
with items table results loop control loop var stdout item ignore errors yes expected results the results stdout should contain the result of the previous show command the results cli command field should be accessed without issue allowing pacl table to be saved without error in this example actual results task ok table results ansible item label key value bandwidth description connected to on its port duplex full null lineprotocol up connected macaddress mediatype unknown media type mtu operstatus up type igbe ansible item result true ansible no log false ansible parsed true changed false invocation module args auth pass value specified in no log parameter authorize true commands show run interface include interface access group host interval match all password value specified in no log parameter port provider auth pass value specified in no log parameter authorize true host password value specified in no log parameter port ssh keyfile ssh id rsa timeout transport cli username admin version retries ssh keyfile root ssh id rsa timeout transport cli use ssl true username admin validate certs true wait for null module name ios command net item key value bandwidth description connected to on its port duplex full null lineprotocol up connected macaddress mediatype unknown media type mtu operstatus up type igbe stdout show run interface include interface access terface include interface access g roup ninterface stdout lines show run interface include interface access terface include interface access g roup interface warnings task fatal failed failed true msg the field args has an invalid value which appears to include a variable that is undefined the error was dict object has no attribute cli command n nthe error appears to have been in home actionmystique program files ubuntu ansible roles roles save table tasks main yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name saving item into local file n here nwe could be wrong but this one looks like it might be an issue with nmissing quotes always quote template expression brackets when they nstart a value for instance n n with items n foo n nshould be written as n n with items n foo n we can see that results stdout is very strange it contains the command which is repeated twisted several times without any result dict object has no attribute cli command despite a correct stdout on the device cli iosv show run interface include interface access group interface ip access group ip acl in mac access group mac acl in ,1 907,4569335430.0,IssuesEvent,2016-09-15 16:55:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloudformation module fails with ,affects_2.2 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY After a successful stack operation the module always fails when calling `exit_json(**results)` due to a bad result variable type. The cloudformation module tries to include stack events in module output. The stack events are an `itertools.imap` iterator which ansible doesn't know how to deal with when calling `exit_json()` and fails with a `Value of unknown type: `. The fix here can be as simple as wrapping the stack events variable in a call to `list()`. 
The more general fix would be to make ansible's `remove_values()` function accept `itertools.imap` objects. ##### STEPS TO REPRODUCE **ansible-cfn-bug.yml** ```yaml --- - hosts: localhost tasks: - name: deploy stack cloudformation: stack_name: ansible-cfn-bug state: present template: stack.json ``` **stack.json** ```json { ""Resources"": { ""S3B3QFCX"": { ""Type"": ""AWS::S3::Bucket"", ""Properties"": {} } } } ``` ```bash ansible-playbook -vvvv -c local ansible-cfn-bug.yml ``` ##### EXPECTED RESULTS Stack should be deployed. Deploy task should successfully complete. ##### ACTUAL RESULTS Deploy task fails for reasons listed above. ``` TASK [deploy stack] ************************************************************ task path: [redacted]/ansible-cfn-bug.yml:4 Using module file [redacted]/venv/local/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/cloudformation.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: [redacted] <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975 `"" && echo ansible-tmp-1473951676.35-213586073691975=""` echo $HOME/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpGPzHre TO [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/ [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '[redacted]/venv/bin/python2 [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py; rm -rf ""[redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py"", line 402, in main() File ""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py"", line 395, in main module.exit_json(**result) File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1798, in exit_json File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 387, in remove_values File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 387, in File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 398, in remove_values TypeError: Value of unknown type: , fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""cloudformation"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py\"", line 402, in \n main()\n File \""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py\"", line 395, in main\n module.exit_json(**result)\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1798, in exit_json\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 387, in remove_values\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 387, in \n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 398, in remove_values\nTypeError: Value of unknown type: , \n"", ``` ",True,"cloudformation module fails with - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY After a successful stack operation the module always fails when calling `exit_json(**results)` due to a bad result variable type. The cloudformation module tries to include stack events in module output. The stack events are an `itertools.imap` iterator which ansible doesn't know how to deal with when calling `exit_json()` and fails with a `Value of unknown type: `. The fix here can be as simple as wrapping the stack events variable in a call to `list()`. The more general fix would be to make ansible's `remove_values()` function accept `itertools.imap` objects. ##### STEPS TO REPRODUCE **ansible-cfn-bug.yml** ```yaml --- - hosts: localhost tasks: - name: deploy stack cloudformation: stack_name: ansible-cfn-bug state: present template: stack.json ``` **stack.json** ```json { ""Resources"": { ""S3B3QFCX"": { ""Type"": ""AWS::S3::Bucket"", ""Properties"": {} } } } ``` ```bash ansible-playbook -vvvv -c local ansible-cfn-bug.yml ``` ##### EXPECTED RESULTS Stack should be deployed. Deploy task should successfully complete. ##### ACTUAL RESULTS Deploy task fails for reasons listed above. ``` TASK [deploy stack] ************************************************************ task path: [redacted]/ansible-cfn-bug.yml:4 Using module file [redacted]/venv/local/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/cloudformation.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: [redacted] <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975 `"" && echo ansible-tmp-1473951676.35-213586073691975=""` echo $HOME/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpGPzHre TO [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/ [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '[redacted]/venv/bin/python2 [redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/cloudformation.py; rm -rf ""[redacted]/.ansible/tmp/ansible-tmp-1473951676.35-213586073691975/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py"", line 402, in main() File ""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py"", line 395, in main module.exit_json(**result) File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1798, in exit_json File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 387, in remove_values File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 387, in File ""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py"", line 398, in remove_values TypeError: Value of unknown type: , fatal: [localhost]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""cloudformation"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py\"", line 402, in \n main()\n File \""/tmp/ansible_rX9aA1/ansible_module_cloudformation.py\"", line 395, in main\n module.exit_json(**result)\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1798, in exit_json\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 387, in remove_values\n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 387, in \n File \""/tmp/ansible_rX9aA1/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 398, in remove_values\nTypeError: Value of unknown type: , \n"", ``` ",1,cloudformation module fails with issue type bug report component name cloudformation ansible version ansible configuration n a os environment n a summary after a successful stack operation the module always fails when calling exit json results due to a bad result variable type the cloudformation module tries to include stack events in module output the stack events are an itertools imap iterator which ansible doesn t know how to deal with when calling exit json and fails with a value of unknown type the fix here can be as simple as wrapping the stack events variable in a call to list the more general fix would be to make ansible s remove values function accept itertools imap objects steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible cfn bug yml yaml hosts localhost tasks name deploy stack cloudformation stack name ansible cfn bug state present template stack json stack json json resources type aws bucket properties bash ansible playbook vvvv c local ansible cfn bug yml expected results stack should be deployed deploy task should successfully complete actual results deploy task fails for reasons listed above task task path ansible cfn bug yml using module file venv local lib site packages ansible modules core cloud amazon cloudformation py establish local connection for user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpgpzhre to ansible tmp ansible tmp cloudformation py exec bin sh c chmod u x ansible tmp ansible tmp ansible tmp ansible tmp cloudformation py sleep exec bin sh c venv bin ansible tmp ansible tmp cloudformation py rm rf ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module cloudformation py line in main file tmp ansible ansible module cloudformation py line in main module exit 
json result file tmp ansible ansible modlib zip ansible module utils basic py line in exit json file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in file tmp ansible ansible modlib zip ansible module utils basic py line in remove values typeerror value of unknown type fatal failed changed false failed true invocation module name cloudformation module stderr traceback most recent call last n file tmp ansible ansible module cloudformation py line in n main n file tmp ansible ansible module cloudformation py line in main n module exit json result n file tmp ansible ansible modlib zip ansible module utils basic py line in exit json n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values n file tmp ansible ansible modlib zip ansible module utils basic py line in n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values ntypeerror value of unknown type n ,1 1175,5096327818.0,IssuesEvent,2017-01-03 17:50:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,[2.2][hostname module] Could not set property: Failed to activate service 'org.freedesktop.hostname1': timed out,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hostname module ##### ANSIBLE VERSION ``` # ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT local run on ubuntu xenial (inside lxc containers with kitchen-test+kitchen-ansible) ##### SUMMARY hostname module fails to set with above error ##### STEPS TO REPRODUCE ``` - debug: var=hostname_hostname - name: set hostname hostname: ""name={{ hostname_hostname }}"" ``` ##### EXPECTED RESULTS set hostname without error. was working with ansible 2.1 ##### ACTUAL RESULTS task is failing ``` TASK [hostname : debug] ******************************************************** task path: /tmp/kitchen/hostname/tasks/main.yml:3 ok: [localhost] => { ""hostname_hostname"": ""example"" } TASK [hostname : set hostname] ************************************************* task path: /tmp/kitchen/hostname/tasks/main.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/hostname.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250 `"" && echo ansible-tmp-1478906878.17-90221180289250=""` echo $HOME/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250 `"" ) && sleep 0' PUT /tmp/tmpDdbhyS TO /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/ /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""name"": ""example"" }, ""module_name"": ""hostname"" }, ""msg"": ""Command failed rc=1, out=, err=Could not set property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"" } to retry, use: --limit @/tmp/kitchen/default.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` trying ``` systemctl restart systemd-logind ``` and restarting playbook doesn't help using hostname command directly is working. ",True,"[2.2][hostname module] Could not set property: Failed to activate service 'org.freedesktop.hostname1': timed out - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hostname module ##### ANSIBLE VERSION ``` # ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT local run on ubuntu xenial (inside lxc containers with kitchen-test+kitchen-ansible) ##### SUMMARY hostname module fails to set with above error ##### STEPS TO REPRODUCE ``` - debug: var=hostname_hostname - name: set hostname hostname: ""name={{ hostname_hostname }}"" ``` ##### EXPECTED RESULTS set hostname without error. was working with ansible 2.1 ##### ACTUAL RESULTS task is failing ``` TASK [hostname : debug] ******************************************************** task path: /tmp/kitchen/hostname/tasks/main.yml:3 ok: [localhost] => { ""hostname_hostname"": ""example"" } TASK [hostname : set hostname] ************************************************* task path: /tmp/kitchen/hostname/tasks/main.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/hostname.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250 `"" && echo ansible-tmp-1478906878.17-90221180289250=""` echo $HOME/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250 `"" ) && sleep 0' PUT /tmp/tmpDdbhyS TO /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/ /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/hostname.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1478906878.17-90221180289250/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""name"": ""example"" }, ""module_name"": ""hostname"" }, ""msg"": ""Command failed rc=1, out=, err=Could not set property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"" } to retry, use: --limit @/tmp/kitchen/default.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` trying ``` systemctl restart systemd-logind ``` and restarting playbook doesn't help using hostname command directly is working. 
",1, could not set property failed to activate service org freedesktop timed out issue type bug report component name hostname module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment local run on ubuntu xenial inside lxc containers with kitchen test kitchen ansible summary hostname module fails to set with above error steps to reproduce debug var hostname hostname name set hostname hostname name hostname hostname expected results set hostname without error was working with ansible actual results task is failing task task path tmp kitchen hostname tasks main yml ok hostname hostname example task task path tmp kitchen hostname tasks main yml using module file usr local lib dist packages ansible modules core system hostname py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpddbhys to root ansible tmp ansible tmp hostname py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp hostname py sleep exec bin sh c usr bin python root ansible tmp ansible tmp hostname py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args name example module name hostname msg command failed rc out err could not set property failed to activate service org freedesktop timed out n to retry use limit tmp kitchen default retry play recap localhost ok changed unreachable failed trying systemctl restart systemd logind and restarting playbook doesn t help using hostname command directly is working ,1 883,4543514348.0,IssuesEvent,2016-09-10 05:42:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Can not use state: started when specify log_driver in docker_container module,affects_2.1 bug_report cloud docker in progress waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When I use state: started to start container with container that created with log_driver option. 
It always remove container and show error Error creating container: 400 Client Error: Bad Request (""No command specified"") ##### STEPS TO REPRODUCE ``` --- - name: Test create and start container hosts: localhost connection: local gather_facts: no tasks: - docker_container: name: test image: nginx log_driver: gelf log_options: gelf-address: udp://graylog.example.com:12201 state: present - docker_container: name: test state: started ``` ##### EXPECTED RESULTS The container should start correctly ##### ACTUAL RESULTS ``` Using /home/username/git/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [Test Docker container] *************************************************** TASK [docker_container] ******************************************************** task path: /home/username/git/ansible/test.yml:7 ESTABLISH LOCAL CONNECTION FOR USER: username EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808 `"" && echo ansible-tmp-1472553339.39-107102396799808=""` echo $HOME/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808 `"" ) && sleep 0' PUT /tmp/tmpHnZwHc TO /home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/docker_container EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/docker_container; rm -rf ""/home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {""ansible_facts"": {""ansible_docker_container"": {""AppArmorProfile"": """", ""Args"": [""-g"", ""daemon off;""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""nginx"", ""-g"", ""daemon off;""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"", ""NGINX_VERSION=1.11.3-1~jessie""], ""ExposedPorts"": {""443/tcp"": {}, ""80/tcp"": {}}, ""Hostname"": ""4dba25aa3867"", ""Image"": ""nginx"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-08-30T10:35:49.623067185Z"", ""Driver"": ""overlay2"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": {""LowerDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7-init/diff:/var/lib/docker/overlay2/66ce9000f8d2ada07c7de4162b95a7fc120937c3f577d5ce992092129bf109ce/diff:/var/lib/docker/overlay2/8765c294e83460225a9626e86bb0ab00a33d6ff32ac4886c35526a00e2d80abf/diff:/var/lib/docker/overlay2/661c6e859f6ba5dc7c36934ec20f67aafaa8baa21af6e3e666539634ba7d96a7/diff"", ""MergedDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/merged"", ""UpperDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/diff"", ""WorkDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/work""}, ""Name"": ""overlay2""}, ""HostConfig"": {""AutoRemove"": false, ""Binds"": [], ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""Cgroup"": """", ""CgroupParent"": """", 
""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuCount"": 0, ""CpuPercent"": 0, ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""DiskQuota"": 0, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IOMaximumBandwidth"": 0, ""IOMaximumIOps"": 0, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {""gelf-address"": ""udp://graylog.example.com:12201""}, ""Type"": ""gelf""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""default"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""Runtime"": ""runc"", ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""UsernsMode"": """", ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": """", ""HostsPath"": """", ""Id"": ""4dba25aa3867c34d9fbefeb5ac350619ec3125ffd9ef6b3623916f060631c969"", ""Image"": ""sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b"", ""LogPath"": """", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/test"", ""NetworkSettings"": {""Bridge"": """", ""EndpointID"": """", ""Gateway"": """", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": """", ""IPPrefixLen"": 0, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": """", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": """", ""Gateway"": """", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": """", ""IPPrefixLen"": 0, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": """", ""NetworkID"": """"}}, ""Ports"": null, ""SandboxID"": """", ""SandboxKey"": """", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""nginx"", ""ProcessLabel"": """", ""ResolvConfPath"": """", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 0, ""Restarting"": false, ""Running"": false, ""StartedAt"": ""0001-01-01T00:00:00Z"", ""Status"": ""created""}}}, ""changed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""cleanup"": false, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""ignore_image"": false, ""image"": ""nginx"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""gelf"", ""log_options"": {""gelf-address"": ""udp://graylog.example.com:12201""}, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, 
""memory_swappiness"": null, ""name"": ""test"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""present"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}} TASK [docker_container] ******************************************************** task path: /home/username/git/ansible/test.yml:15 ESTABLISH LOCAL CONNECTION FOR USER: username EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115 `"" && echo ansible-tmp-1472553349.83-249876067720115=""` echo $HOME/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115 `"" ) && sleep 0' PUT /tmp/tmpeeGOlf TO /home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/docker_container EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/docker_container; rm -rf ""/home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""cleanup"": false, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""ignore_image"": false, ""image"": null, ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""json-file"", ""log_options"": null, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""test"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""msg"": ""Error creating container: 400 Client Error: Bad Request (\""No 
command specified\"")""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=1 changed=1 unreachable=0 failed=1 ```",True,"Can not use state: started when specify log_driver in docker_container module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When I use state: started to start container with container that created with log_driver option. It always remove container and show error Error creating container: 400 Client Error: Bad Request (""No command specified"") ##### STEPS TO REPRODUCE ``` --- - name: Test create and start container hosts: localhost connection: local gather_facts: no tasks: - docker_container: name: test image: nginx log_driver: gelf log_options: gelf-address: udp://graylog.example.com:12201 state: present - docker_container: name: test state: started ``` ##### EXPECTED RESULTS The container should start correctly ##### ACTUAL RESULTS ``` Using /home/username/git/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [Test Docker container] *************************************************** TASK [docker_container] ******************************************************** task path: /home/username/git/ansible/test.yml:7 ESTABLISH LOCAL CONNECTION FOR USER: username EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808 `"" && echo ansible-tmp-1472553339.39-107102396799808=""` echo $HOME/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808 `"" ) && sleep 0' PUT /tmp/tmpHnZwHc TO /home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/docker_container EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/docker_container; rm -rf ""/home/username/.ansible/tmp/ansible-tmp-1472553339.39-107102396799808/"" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {""ansible_facts"": {""ansible_docker_container"": {""AppArmorProfile"": """", ""Args"": [""-g"", ""daemon off;""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""nginx"", ""-g"", ""daemon off;""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"", ""NGINX_VERSION=1.11.3-1~jessie""], ""ExposedPorts"": {""443/tcp"": {}, ""80/tcp"": {}}, ""Hostname"": ""4dba25aa3867"", ""Image"": ""nginx"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-08-30T10:35:49.623067185Z"", ""Driver"": ""overlay2"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": {""LowerDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7-init/diff:/var/lib/docker/overlay2/66ce9000f8d2ada07c7de4162b95a7fc120937c3f577d5ce992092129bf109ce/diff:/var/lib/docker/overlay2/8765c294e83460225a9626e86bb0ab00a33d6ff32ac4886c35526a00e2d80abf/diff:/var/lib/docker/overlay2/661c6e859f6ba5dc7c36934ec20f67aafaa8baa21af6e3e666539634ba7d96a7/diff"", ""MergedDir"": 
""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/merged"", ""UpperDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/diff"", ""WorkDir"": ""/var/lib/docker/overlay2/ab723dd251634a8c92d1a1fc933900861a9bddf52589e1303ebaaf0a899c5af7/work""}, ""Name"": ""overlay2""}, ""HostConfig"": {""AutoRemove"": false, ""Binds"": [], ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""Cgroup"": """", ""CgroupParent"": """", ""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuCount"": 0, ""CpuPercent"": 0, ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""DiskQuota"": 0, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IOMaximumBandwidth"": 0, ""IOMaximumIOps"": 0, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {""gelf-address"": ""udp://graylog.example.com:12201""}, ""Type"": ""gelf""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""default"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""Runtime"": ""runc"", ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""UsernsMode"": """", ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": """", ""HostsPath"": """", ""Id"": ""4dba25aa3867c34d9fbefeb5ac350619ec3125ffd9ef6b3623916f060631c969"", ""Image"": ""sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b"", ""LogPath"": """", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/test"", ""NetworkSettings"": {""Bridge"": """", ""EndpointID"": """", ""Gateway"": """", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": """", ""IPPrefixLen"": 0, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": """", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": """", ""Gateway"": """", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": """", ""IPPrefixLen"": 0, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": """", ""NetworkID"": """"}}, ""Ports"": null, ""SandboxID"": """", ""SandboxKey"": """", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""nginx"", ""ProcessLabel"": """", ""ResolvConfPath"": """", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 0, ""Restarting"": false, ""Running"": false, ""StartedAt"": ""0001-01-01T00:00:00Z"", ""Status"": ""created""}}}, ""changed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""cleanup"": false, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, 
""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""ignore_image"": false, ""image"": ""nginx"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""gelf"", ""log_options"": {""gelf-address"": ""udp://graylog.example.com:12201""}, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""test"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""present"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}} TASK [docker_container] ******************************************************** task path: /home/username/git/ansible/test.yml:15 ESTABLISH LOCAL CONNECTION FOR USER: username EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115 `"" && echo ansible-tmp-1472553349.83-249876067720115=""` echo $HOME/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115 `"" ) && sleep 0' PUT /tmp/tmpeeGOlf TO /home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/docker_container EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/docker_container; rm -rf ""/home/username/.ansible/tmp/ansible-tmp-1472553349.83-249876067720115/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_version"": null, ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""cleanup"": false, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""ignore_image"": false, ""image"": null, ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""json-file"", ""log_options"": null, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""test"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": false, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""msg"": ""Error creating container: 400 Client Error: Bad Request (\""No command specified\"")""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=1 changed=1 unreachable=0 failed=1 ```",1,can not use state started when specify log driver in docker container module issue type bug report component name docker container ansible version ansible os environment n a summary when i use state started to start container with container that created with log driver option it always remove container and show error error creating container client error bad request no command specified steps to reproduce name test create and start container hosts localhost connection local gather facts no tasks docker container name test image nginx log driver gelf log options gelf address udp graylog example com state present docker container name test state started expected results the container should start correctly actual results using home username git ansible ansible cfg as config file loaded callback default of type stdout playbook test yml plays in test yml play task task path home username git ansible test yml establish local connection for user username exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmphnzwhc to home username ansible tmp ansible tmp docker container exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home username ansible tmp ansible tmp docker container rm rf home username ansible tmp ansible tmp dev null sleep changed ansible facts ansible docker container 
apparmorprofile args config attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env exposedports tcp tcp hostname image nginx labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir created driver execids null graphdriver data lowerdir var lib docker init diff var lib docker diff var lib docker diff var lib docker diff mergeddir var lib docker merged upperdir var lib docker diff workdir var lib docker work name hostconfig autoremove false binds blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice null capadd null capdrop null cgroup cgroupparent consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpushares cpusetcpus cpusetmems devices null diskquota dns null dnsoptions null dnssearch null extrahosts null groupadd null iomaximumbandwidth iomaximumiops ipcmode isolation kernelmemory links null logconfig config gelf address udp graylog example com type gelf memory memoryreservation memoryswap memoryswappiness networkmode default oomkilldisable false oomscoreadj pidmode pidslimit portbindings null privileged false publishallports false readonlyrootfs false restartpolicy maximumretrycount name runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath hostspath id image logpath mountlabel mounts name test networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress networks bridge aliases null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress networkid ports null sandboxid sandboxkey secondaryipaddresses null null path nginx processlabel resolvconfpath restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running false startedat status created changed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null cleanup false command null cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname null ignore image false image nginx interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver gelf log options gelf address udp graylog example com mac address null memory memory reservation null memory swap null memory swappiness null name test network mode null networks null oom killer null paused false pid mode null privileged false published ports null pull false purge networks null read only false recreate false restart false restart policy null restart retries security opts null shm size null ssl version null state present stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes null volumes from null module name docker container task task path home username git ansible test yml establish local connection for user username exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpeegolf to home username ansible tmp ansible tmp docker container exec bin sh c lang en us utf lc all en us utf lc 
messages en us utf usr bin python home username ansible tmp ansible tmp docker container rm rf home username ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null cleanup false command null cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname null ignore image false image null interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver json file log options null mac address null memory memory reservation null memory swap null memory swappiness null name test network mode null networks null oom killer null paused false pid mode null privileged false published ports null pull false purge networks null read only false recreate false restart false restart policy null restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes null volumes from null module name docker container msg error creating container client error bad request no command specified no more hosts left play recap localhost ok changed unreachable failed ,1 1893,6577538313.0,IssuesEvent,2017-09-12 01:36:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Please provide yum swap capability,affects_2.0 feature_idea waiting_on_maintainer,"##### Issue Type: Feature Idea ##### Plugin Name: yum ##### Ansible Version: 2.0.1.0 ##### Environment: N/A ##### Summary: yum module does not provide swap capability ",True,"Please provide yum swap capability - ##### Issue Type: Feature Idea ##### Plugin Name: yum ##### Ansible Version: 2.0.1.0 ##### Environment: N/A ##### Summary: yum module does not provide swap capability ",1,please provide yum swap capability issue type feature idea plugin name yum ansible version environment n a summary yum module does not provide swap capability ,1 1833,6577362666.0,IssuesEvent,2017-09-12 00:23:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,list supported devices for ios and nxos,affects_2.1 docs_report networking waiting_on_maintainer," ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_* nxos_* ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start. 
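A note on the summary above: the ios_* and nxos_* modules of this generation execute on the Ansible control node and reach the device over SSH (the CLI transport), so the switch itself does not need a Python interpreter. The sketch below illustrates that pattern; the inventory group name and the `cli` provider dictionary (host, username, password) are assumptions for illustration, not values taken from this report.

```yaml
# Minimal sketch: ios_command runs locally on the control node and talks
# to the switch over SSH, so no Python is required on the device.
- hosts: ios            # assumed inventory group
  connection: local
  gather_facts: no
  tasks:
    - name: gather version information over the CLI transport
      ios_command:
        commands:
          - show version
        provider: "{{ cli }}"   # assumed dict with host/username/password
      register: show_version
```

With this layout the only Python that runs is on the control machine, which is why the press-release claim and the "no interpreter on the switch" observation are not actually in conflict.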
##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"list supported devices for ios and nxos - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_* nxos_* ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start. ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,list supported devices for ios and nxos issue type documentation report component name ios nxos ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration n a os environment n a summary it would be good to have a complete list of supported devices for example ansible state here that ios xe is supported i have tried running against an ios xe switch with the ios command module and there is no python interpreter installed on the switch for ansible to start steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results ,1 1065,4889234068.0,IssuesEvent,2016-11-18 09:31:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,include_role: privilege escalation (nested role),affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg configured module search path = ['library'] ``` (branch stable-2.2) ##### CONFIGURATION local roles and libraries location ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY privileges are not pass to nested role ##### STEPS TO REPRODUCE ``` - hosts: all gather_facts: True tasks: - command: ""whoami"" - include_role: name: ""role_test_a"" become: ""yes"" become_user: ""user2"" ``` role_test_a/tasks/main.yml ``` --- - command: ""whoami"" - include_role: name: ""role_test_b"" ``` role_test_b/tasks/main.yml ``` --- - command: ""whoami"" ``` ##### EXPECTED RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [command] ***************************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002667"", ""end"": ""2016-10-27 09:37:19.915705"", ""rc"": 0, ""start"": ""2016-10-27 09:37:19.913038"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} TASK [role_test_a : command] *************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.004548"", ""end"": ""2016-10-27 09:37:20.550015"", ""rc"": 0, ""start"": ""2016-10-27 09:37:20.545467"", ""stderr"": """", ""stdout"": ""user2"", ""stdout_lines"": [""user2""], ""warnings"": []} TASK [role_test_b : command] 
*************************************************** changed: [host ] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002869"", ""end"": ""2016-10-27 09:37:21.134721"", ""rc"": 0, ""start"": ""2016-10-27 09:37:21.131852"", ""stderr"": """", ""stdout"": ""user2"", ""stdout_lines"": [""user2""], ""warnings"": []} PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [command] ***************************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002667"", ""end"": ""2016-10-27 09:37:19.915705"", ""rc"": 0, ""start"": ""2016-10-27 09:37:19.913038"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} TASK [role_test_a : command] *************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.004548"", ""end"": ""2016-10-27 09:37:20.550015"", ""rc"": 0, ""start"": ""2016-10-27 09:37:20.545467"", ""stderr"": """", ""stdout"": ""user2"", ""stdout_lines"": [""user2""], ""warnings"": []} TASK [role_test_b : command] *************************************************** changed: [host ] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002869"", ""end"": ""2016-10-27 09:37:21.134721"", ""rc"": 0, ""start"": ""2016-10-27 09:37:21.131852"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` ",True,"include_role: privilege escalation (nested role) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg configured module search path = ['library'] ``` (branch stable-2.2) ##### CONFIGURATION local roles and libraries location ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY privileges are not pass to nested role ##### STEPS TO REPRODUCE ``` - hosts: all gather_facts: True tasks: - command: ""whoami"" - include_role: name: ""role_test_a"" become: ""yes"" become_user: ""user2"" ``` role_test_a/tasks/main.yml ``` --- - command: ""whoami"" - include_role: name: ""role_test_b"" ``` role_test_b/tasks/main.yml ``` --- - command: ""whoami"" ``` ##### EXPECTED RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [command] ***************************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002667"", ""end"": ""2016-10-27 09:37:19.915705"", ""rc"": 0, ""start"": ""2016-10-27 09:37:19.913038"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} TASK [role_test_a : command] *************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.004548"", ""end"": ""2016-10-27 09:37:20.550015"", ""rc"": 0, ""start"": ""2016-10-27 09:37:20.545467"", ""stderr"": """", ""stdout"": ""user2"", 
""stdout_lines"": [""user2""], ""warnings"": []} TASK [role_test_b : command] *************************************************** changed: [host ] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002869"", ""end"": ""2016-10-27 09:37:21.134721"", ""rc"": 0, ""start"": ""2016-10-27 09:37:21.131852"", ""stderr"": """", ""stdout"": ""user2"", ""stdout_lines"": [""user2""], ""warnings"": []} PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [command] ***************************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002667"", ""end"": ""2016-10-27 09:37:19.915705"", ""rc"": 0, ""start"": ""2016-10-27 09:37:19.913038"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} TASK [role_test_a : command] *************************************************** changed: [host] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.004548"", ""end"": ""2016-10-27 09:37:20.550015"", ""rc"": 0, ""start"": ""2016-10-27 09:37:20.545467"", ""stderr"": """", ""stdout"": ""user2"", ""stdout_lines"": [""user2""], ""warnings"": []} TASK [role_test_b : command] *************************************************** changed: [host ] => {""changed"": true, ""cmd"": [""whoami""], ""delta"": ""0:00:00.002869"", ""end"": ""2016-10-27 09:37:21.134721"", ""rc"": 0, ""start"": ""2016-10-27 09:37:21.131852"", ""stderr"": """", ""stdout"": ""user1"", ""stdout_lines"": [""user1""], ""warnings"": []} PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` ",1,include role privilege escalation nested role issue type bug report component name include role ansible version ansible config file home userdev documents dlp ansible deploy aqz ansible cfg configured module search path branch stable configuration local roles and libraries location os environment master ubuntu managed rhel summary privileges are not pass to nested role steps to reproduce hosts all gather facts true tasks command whoami include role name role test a become yes become user role test a tasks main yml command whoami include role name role test b role test b tasks main yml command whoami expected results play task ok task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings play recap host ok changed unreachable failed actual results play task ok task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings play recap host ok changed unreachable failed ,1 1360,5872483397.0,IssuesEvent,2017-05-15 11:39:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"win_stat returns md5, but the data is sha1",affects_1.9 bug_report docs_report waiting_on_maintainer windows,"##### Issue Type: - Bug Report ##### Component Name: win_stat module ##### 
Ansible Version: ansible 1.9.3 (even though I'm on 1.9.3, I provide links to code in the devel branch) ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The win_stat modules reportedly returns a checksum and md5, but in all reality it is a sha1. ##### Steps To Reproduce: Run win_stat on a file and take note of 'md5' result (and the 'checksum' result if desired). ##### Expected Results: I would expect 'md5' result to be the actual md5sum of the file. Additionally, if `sha1` was returned in the result, I would expect it to be the actual sha1sum of the file. ##### Actual Results: In actuality, the 'md5' result is the sha1sum of the file. You can easily see the bug in the devel branch at these two links: https://github.com/ansible/ansible-modules-core/blob/devel/windows/win_stat.ps1#L68 https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell.ps1#L213 ",True,"win_stat returns md5, but the data is sha1 - ##### Issue Type: - Bug Report ##### Component Name: win_stat module ##### Ansible Version: ansible 1.9.3 (even though I'm on 1.9.3, I provide links to code in the devel branch) ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The win_stat modules reportedly returns a checksum and md5, but in all reality it is a sha1. ##### Steps To Reproduce: Run win_stat on a file and take note of 'md5' result (and the 'checksum' result if desired). ##### Expected Results: I would expect 'md5' result to be the actual md5sum of the file. Additionally, if `sha1` was returned in the result, I would expect it to be the actual sha1sum of the file. ##### Actual Results: In actuality, the 'md5' result is the sha1sum of the file. You can easily see the bug in the devel branch at these two links: https://github.com/ansible/ansible-modules-core/blob/devel/windows/win_stat.ps1#L68 https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell.ps1#L213 ",1,win stat returns but the data is issue type bug report component name win stat module ansible version ansible even though i m on i provide links to code in the devel branch ansible configuration n a environment n a summary the win stat modules reportedly returns a checksum and but in all reality it is a steps to reproduce run win stat on a file and take note of result and the checksum result if desired expected results i would expect result to be the actual of the file additionally if was returned in the result i would expect it to be the actual of the file actual results in actuality the result is the of the file you can easily see the bug in the devel branch at these two links ,1 1869,6577493199.0,IssuesEvent,2017-09-12 01:17:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,subversion update documentation.,affects_2.3 docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME subversion ##### SUMMARY - `For password / secret arguments no_log=True should be set`: This isn't set for the `password` documentation section. - `Requirements should be documented, using the requirements=[] field`: The `svn` requirement is documented under `notes`, but it looks like that should migrate to `requirements`. - `Does module use check_mode? Could it be modified to use it? Document it`: It does, its not documented. ",True,"subversion update documentation. 
- ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME subversion ##### SUMMARY - `For password / secret arguments no_log=True should be set`: This isn't set for the `password` documentation section. - `Requirements should be documented, using the requirements=[] field`: The `svn` requirement is documented under `notes`, but it looks like that should migrate to `requirements`. - `Does module use check_mode? Could it be modified to use it? Document it`: It does, its not documented. ",1,subversion update documentation issue type documentation report component name subversion summary for password secret arguments no log true should be set this isn t set for the password documentation section requirements should be documented using the requirements field the svn requirement is documented under notes but it looks like that should migrate to requirements does module use check mode could it be modified to use it document it it does its not documented ,1 1800,6575922908.0,IssuesEvent,2017-09-11 17:50:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Enhancement request: add ability to modify instance user-data after initial launch.,affects_2.2 aws cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2.py ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700) lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY Instance user-data cannot be set or updated except on the first launch of an instance. ##### STEPS TO REPRODUCE This playbook has no effect on user-data whatsoever: ``` - name: Stop nodes to set user-data ec2: region: ""{{ aws_region }}"" state: ""stopped"" wait: yes instance_ids: ""i-12345"" - name: Start nodes after setting user data ec2: region: ""{{ aws_region }}"" state: ""running"" user_data: ""{{ user_data }}"" wait: no instance_ids: ""i-12345"" ``` ##### EXPECTED RESULTS I expect the user-data to get updated in EC2 and the instance to start up. ##### ACTUAL RESULTS The instance starts up but user-data remains unaffected. ",True,"Enhancement request: add ability to modify instance user-data after initial launch. - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2.py ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700) lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700) lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY Instance user-data cannot be set or updated except on the first launch of an instance. 
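Relative to the summary just above: the stop/start playbook shown in the report has no effect because the ec2 module only applies user_data at the moment it creates an instance; for an already-existing instance the parameter is ignored. The sketch below shows the creation-time case that does work today (the AMI id, key pair, security group and user-data file are placeholders, not values from the report); updating user-data on an already-launched instance is exactly the capability the report asks for.

```yaml
# Sketch of the behaviour that works today: user_data is honoured only
# when the ec2 module itself creates the instance.
- name: launch an instance with user-data at creation time
  ec2:
    region: "{{ aws_region }}"
    image: ami-0123456789abcdef0          # hypothetical AMI id
    instance_type: t2.micro               # assumed instance type
    key_name: my_keypair                  # assumed key pair name
    group: default                        # assumed security group
    user_data: "{{ lookup('file', 'user_data.sh') }}"  # placeholder file
    wait: yes
  register: launched
```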
##### STEPS TO REPRODUCE This playbook has no effect on user-data whatsoever: ``` - name: Stop nodes to set user-data ec2: region: ""{{ aws_region }}"" state: ""stopped"" wait: yes instance_ids: ""i-12345"" - name: Start nodes after setting user data ec2: region: ""{{ aws_region }}"" state: ""running"" user_data: ""{{ user_data }}"" wait: no instance_ids: ""i-12345"" ``` ##### EXPECTED RESULTS I expect the user-data to get updated in EC2 and the instance to start up. ##### ACTUAL RESULTS The instance starts up but user-data remains unaffected. ",1,enhancement request add ability to modify instance user data after initial launch issue type feature idea component name py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary instance user data cannot be set or updated except on the first launch of an instance steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used this playbook has no effect on user data whatsoever name stop nodes to set user data region aws region state stopped wait yes instance ids i name start nodes after setting user data region aws region state running user data user data wait no instance ids i expected results i expect the user data to get updated in and the instance to start up actual results the instance starts up but user data remains unaffected ,1 923,4622718442.0,IssuesEvent,2016-09-27 08:36:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config isn't idempotent (in some cases),affects_2.2 bug_report networking P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 1c7890bf86) last updated 2016/09/26 10:42:34 (GMT +100) lib/ansible/modules/core: (devel cf243860ff) last updated 2016/09/26 10:42:39 (GMT +100) lib/ansible/modules/extras: (devel 7aab9cd93b) last updated 2016/09/26 10:42:41 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY May be similar issue to nxos_config https://github.com/ansible/ansible-modules-core/issues/4963 ##### STEPS TO REPRODUCE ```yaml --- - debug: msg=""START cli/src_match_none.yaml"" - name: setup ios_config: commands: - no description - no shutdown parents: - interface Loopback999 match: none provider: ""{{ cli }}"" - name: configure device with config ios_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.updates is not defined"" - name: check device with config ios_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: # Idempotent test # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.changed == false"" - ""result.updates is not defined"" - debug: msg=""END cli/src_match_none.yaml"" ``` **templates/basic/config.j2** ``` interface Loopback999 description this is a test shutdown ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [test_ios_config 
: configure device with config] ************************** task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897 `"" && echo ansible-tmp-1474895611.33-274042674420897=""` echo $HOME/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897 `"" ) && sleep 0' PUT /tmp/tmp1I6w4V TO /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/"" > /dev/null 2>&1 && sleep 0' changed: [ios01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Loopback999\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_ios_config : assert] ************************************************ task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:21 ok: [ios01] => { ""changed"": false, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == true"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" }, ""msg"": ""All assertions passed"" } TASK [test_ios_config : check device with config] ****************************** task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:27 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612 `"" && echo ansible-tmp-1474895624.05-125920805740612=""` echo $HOME/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612 `"" ) && sleep 0' PUT /tmp/tmpkKpaLt TO /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/"" > /dev/null 2>&1 && sleep 0' changed: [ios01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, 
""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Loopback999\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_ios_config : assert] ************************************************ task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:34 fatal: [ios01]: FAILED! => { ""assertion"": ""result.changed == false"", ""changed"": false, ""evaluated_to"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == false"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" } } ``` ",True,"ios_config isn't idempotent (in some cases) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 1c7890bf86) last updated 2016/09/26 10:42:34 (GMT +100) lib/ansible/modules/core: (devel cf243860ff) last updated 2016/09/26 10:42:39 (GMT +100) lib/ansible/modules/extras: (devel 7aab9cd93b) last updated 2016/09/26 10:42:41 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY May be similar issue to nxos_config https://github.com/ansible/ansible-modules-core/issues/4963 ##### STEPS TO REPRODUCE ```yaml --- - debug: msg=""START cli/src_match_none.yaml"" - name: setup ios_config: commands: - no description - no shutdown parents: - interface Loopback999 match: none provider: ""{{ cli }}"" - name: configure device with config ios_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.updates is not defined"" - name: check device with config ios_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: # Idempotent test # https://github.com/ansible/ansible-modules-core/issues/4807 - ""result.changed == false"" - ""result.updates is not defined"" - debug: msg=""END cli/src_match_none.yaml"" ``` **templates/basic/config.j2** ``` interface Loopback999 description this is a test shutdown ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [test_ios_config : configure device with config] ************************** task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897 `"" && echo ansible-tmp-1474895611.33-274042674420897=""` echo $HOME/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897 `"" ) && sleep 0' PUT /tmp/tmp1I6w4V TO /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ 
/home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474895611.33-274042674420897/"" > /dev/null 2>&1 && sleep 0' changed: [ios01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Loopback999\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_ios_config : assert] ************************************************ task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:21 ok: [ios01] => { ""changed"": false, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == true"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" }, ""msg"": ""All assertions passed"" } TASK [test_ios_config : check device with config] ****************************** task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:27 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612 `"" && echo ansible-tmp-1474895624.05-125920805740612=""` echo $HOME/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612 `"" ) && sleep 0' PUT /tmp/tmpkKpaLt TO /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474895624.05-125920805740612/"" > /dev/null 2>&1 && sleep 0' changed: [ios01] => { ""changed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""none"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": false, ""src"": ""interface Loopback999\n description this is a test\n shutdown\n\n"", ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""warnings"": [] } TASK [test_ios_config : 
assert] ************************************************ task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/src_match_none.yaml:34 fatal: [ios01]: FAILED! => { ""assertion"": ""result.changed == false"", ""changed"": false, ""evaluated_to"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""that"": [ ""result.changed == false"", ""result.updates is not defined"" ] }, ""module_name"": ""assert"" } } ``` ",1,ios config isn t idempotent in some cases issue type bug report component name ios config ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary may be similar issue to nxos config steps to reproduce yaml debug msg start cli src match none yaml name setup ios config commands no description no shutdown parents interface match none provider cli name configure device with config ios config src basic config provider cli match none register result assert that result changed true result updates is not defined name check device with config ios config src basic config provider cli match none register result assert that idempotent test result changed false result updates is not defined debug msg end cli src match none yaml templates basic config interface description this is a test shutdown expected results actual results yaml task task path home johnb git ansible inc testing ios roles test ios config tests cli src match none yaml using module file home johnb git ansible inc ansible lib ansible modules core network ios ios config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home johnb ansible tmp ansible tmp ios config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp ios config py sleep exec bin sh c python home johnb ansible tmp ansible tmp ios config py rm rf home johnb ansible tmp ansible tmp dev null sleep changed changed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match none parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport cli username value specified in no log parameter replace line save false src interface n description this is a test n shutdown n n ssh keyfile null timeout transport cli use ssl true username value specified in no log parameter validate certs true warnings task task path home johnb git ansible inc testing ios roles test ios config tests cli src match none yaml ok changed false invocation module args that result changed true result updates is not defined module name assert msg all assertions passed task task path home johnb git ansible inc testing ios roles test ios config tests cli src match none yaml using module file home johnb git ansible inc ansible lib ansible modules core network ios ios config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpkkpalt to home johnb ansible tmp ansible tmp ios config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp ios config py sleep exec bin sh c python home johnb ansible tmp ansible tmp ios config py rm rf home johnb ansible tmp 
ansible tmp dev null sleep changed changed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match none parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport cli username value specified in no log parameter replace line save false src interface n description this is a test n shutdown n n ssh keyfile null timeout transport cli use ssl true username value specified in no log parameter validate certs true warnings task task path home johnb git ansible inc testing ios roles test ios config tests cli src match none yaml fatal failed assertion result changed false changed false evaluated to false failed true invocation module args that result changed false result updates is not defined module name assert ,1 940,4652445906.0,IssuesEvent,2016-10-03 14:02:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible os_server module doesnot work with async_status,affects_2.1 bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module : os_server and async_status ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Tried using os_server module in using async , its failing to handle , output of the job . ##### STEPS TO REPRODUCE Create a playbook to boot a single instance on openstack async ``` --- - name: test the os_server module on async hosts: localhost connection: local gather_facts: false tasks: - name: ""provision os_server resources"" os_server: state: ""present"" auth: auth_url: ""http://localhost:5000/v2.0/"" username: ""openstackusername"" password: ""openstackpassword"" project_name: ""openstackprojectname"" name: ""helloinstance"" image: ""rhel-6.5_jeos"" key_name: ""test_keypair"" api_timeout: 99999 flavor: ""m1.small"" network: ""testnetwork"" async: 1000 poll: 0 register: yum_sleeper - name: 'check on fire and forget task' async_status: jid: ""{{ yum_sleeper.ansible_job_id }}"" register: job_result until: job_result.finished retries: 30 ``` ##### EXPECTED RESULTS Expected , job output with openstack server details . 
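Before the debug output that follows: the fire-and-forget plus async_status polling used in the report is the standard async idiom, and the failure reported further down stems from the async wrapper being unable to parse the job output because a keystoneauth logging warning is printed ahead of the module's JSON. A compact restatement of the pattern, with an explicit delay added between polls (the delay value and the `openstack_auth` variable are assumptions, not values from the report):

```yaml
# Fire-and-forget the long-running provisioning task, then poll its job id.
- name: provision the server without blocking the play
  os_server:
    state: present
    name: helloinstance
    image: rhel-6.5_jeos
    flavor: m1.small
    auth: "{{ openstack_auth }}"   # assumed variable holding the auth dict from the report
  async: 1000
  poll: 0
  register: os_server_job

- name: poll until the background job finishes
  async_status:
    jid: "{{ os_server_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 10
```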
##### ACTUAL RESULTS https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e command: ``` ansible-playbook -vvvvvv pluck_os.yml ``` Actual output: https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e ``` Using /etc/ansible/ansible.cfg as config file [WARNING]: provided hosts list is empty, only localhost is available Loaded callback default of type stdout, v2.0 PLAYBOOK: pluck_os.yml ********************************************************* 1 plays in pluck_os.yml PLAY [test the os_server module on async] ************************************** TASK [provision/deprovision os_server resources by looping on count] *********** task path: /root/linch-pin/pluck_os.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `"" && echo ansible-tmp-1470853932.03-182394085106578=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpF3aQ4d TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server <127.0.0.1> PUT /tmp/tmpQqJAgL TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper <127.0.0.1> EXEC /bin/sh -c 'chmod -R u+x /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper 501588893815 1000 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""ansible_job_id"": ""501588893815.445"", ""changed"": false, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""started"": 1} TASK [check on fire and forget task] ******************************************* task path: /root/linch-pin/pluck_os.yml:24 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `"" && echo ansible-tmp-1470853933.23-277604438370373=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpIkSVL9 TO /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (29 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 1, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `"" && echo ansible-tmp-1470853938.32-242588126297954=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp4jd2U7 TO /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status <127.0.0.1> EXEC /bin/sh -c 
'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (28 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 2, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `"" && echo ansible-tmp-1470853943.42-153837274674906=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpchWAZ5 TO /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (27 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 3, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `"" && echo ansible-tmp-1470853948.51-150710275020038=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpTFyDJs TO /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (26 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 4, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `"" && echo ansible-tmp-1470853953.6-191645118737367=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpFrI9Bt TO /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> {""ansible_job_id"": ""501588893815.445"", ""changed"": false, ""failed"": true, ""finished"": 1, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""msg"": ""Could not parse job output: No handlers could be found for logger \""keystoneauth.identity.base\""\n\n{\""invocation\"": {\""module_args\"": {\""auth_type\"": null, \""availability_zone\"": null, \""image\"": \""rhel-6.5_jeos\"", \""image_exclude\"": \""(deprecated)\"", \""flavor_include\"": null, \""meta\"": null, \""flavor\"": \""m1.small\"", \""cloud\"": null, \""scheduler_hints\"": null, \""boot_from_volume\"": false, \""userdata\"": null, \""network\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""nics\"": [], \""floating_ips\"": null, \""flavor_ram\"": null, \""volume_size\"": false, \""state\"": \""present\"", \""auto_ip\"": true, \""security_groups\"": [\""default\""], \""config_drive\"": false, \""volumes\"": [], \""key_name\"": \""ci-factory\"", \""api_timeout\"": 99999, \""auth\"": {\""username\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""project_name\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""password\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""auth_url\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\""}, \""endpoint_type\"": \""public\"", \""boot_volume\"": null, \""key\"": null, \""cacert\"": null, \""wait\"": true, \""name\"": \""helloinstance\"", \""region_name\"": null, \""timeout\"": 180, \""cert\"": null, \""terminate_volume\"": false, \""verify\"": true, \""floating_ip_pools\"": null}}, \""openstack\"": {\""OS-EXT-STS:task_state\"": null, \""addresses\"": {\""e2e-openstack\"": [{\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""172.16.100.97\"", \""OS-EXT-IPS:type\"": \""fixed\""}, {\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""10.8.183.233\"", \""OS-EXT-IPS:type\"": \""floating\""}]}, \""image\"": {\""id\"": \""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\"", \""name\"": \""rhel-6.5_jeos\""}, \""OS-EXT-STS:vm_state\"": \""active\"", \""OS-SRV-USG:launched_at\"": \""2016-08-10T18:32:23.000000\"", \""NAME_ATTR\"": \""name\"", \""flavor\"": {\""id\"": \""2\"", \""name\"": \""m1.small\""}, \""az\"": \""nova\"", \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""cloud\"": \""defaults\"", \""user_id\"": \""9c770dbddda444799e627004fee26e0a\"", \""OS-DCF:diskConfig\"": \""MANUAL\"", \""networks\"": {\""e2e-openstack\"": [\""172.16.100.97\"", \""10.8.183.233\""]}, \""accessIPv4\"": \""10.8.183.233\"", \""accessIPv6\"": \""\"", \""security_groups\"": [{\""id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""name\"": \""default\"", \""security_group_rules\"": [{\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""ade9fcb9-14c1-4975-a04d-6007f80005c1\""}, {\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""d03e4bae-24b6-415a-a30c-ee0d060f566f\""}], \""description\"": \""Default security group\""}], \""key_name\"": \""ci-factory\"", \""progress\"": 0, \""OS-EXT-STS:power_state\"": 1, \""OS-EXT-AZ:availability_zone\"": \""nova\"", \""metadata\"": {}, \""status\"": 
\""ACTIVE\"", \""updated\"": \""2016-08-10T18:32:23Z\"", \""hostId\"": \""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\"", \""HUMAN_ID\"": true, \""OS-SRV-USG:terminated_at\"": null, \""public_v4\"": \""10.8.183.233\"", \""public_v6\"": \""\"", \""private_v4\"": \""172.16.100.97\"", \""interface_ip\"": \""10.8.183.233\"", \""name\"": \""helloinstance\"", \""created\"": \""2016-08-10T18:32:16Z\"", \""tenant_id\"": \""f1dda47890754241a3e111f9b7394707\"", \""region\"": \""\"", \""adminPass\"": \""tX4SZ3gV85Sb\"", \""os-extended-volumes:volumes_attached\"": [], \""volumes\"": [], \""config_drive\"": \""\"", \""human_id\"": \""helloinstance\""}, \""changed\"": true, \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""server\"": {\""OS-EXT-STS:task_state\"": null, \""addresses\"": {\""e2e-openstack\"": [{\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""172.16.100.97\"", \""OS-EXT-IPS:type\"": \""fixed\""}, {\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""10.8.183.233\"", \""OS-EXT-IPS:type\"": \""floating\""}]}, \""image\"": {\""id\"": \""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\"", \""name\"": \""rhel-6.5_jeos\""}, \""OS-EXT-STS:vm_state\"": \""active\"", \""OS-SRV-USG:launched_at\"": \""2016-08-10T18:32:23.000000\"", \""NAME_ATTR\"": \""name\"", \""flavor\"": {\""id\"": \""2\"", \""name\"": \""m1.small\""}, \""az\"": \""nova\"", \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""cloud\"": \""defaults\"", \""user_id\"": \""9c770dbddda444799e627004fee26e0a\"", \""OS-DCF:diskConfig\"": \""MANUAL\"", \""networks\"": {\""e2e-openstack\"": [\""172.16.100.97\"", \""10.8.183.233\""]}, \""accessIPv4\"": \""10.8.183.233\"", \""accessIPv6\"": \""\"", \""security_groups\"": [{\""id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""name\"": \""default\"", \""security_group_rules\"": [{\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""ade9fcb9-14c1-4975-a04d-6007f80005c1\""}, {\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""d03e4bae-24b6-415a-a30c-ee0d060f566f\""}], \""description\"": \""Default security group\""}], \""key_name\"": \""ci-factory\"", \""progress\"": 0, \""OS-EXT-STS:power_state\"": 1, \""OS-EXT-AZ:availability_zone\"": \""nova\"", \""metadata\"": {}, \""status\"": \""ACTIVE\"", \""updated\"": \""2016-08-10T18:32:23Z\"", \""hostId\"": \""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\"", \""HUMAN_ID\"": true, \""OS-SRV-USG:terminated_at\"": null, \""public_v4\"": \""10.8.183.233\"", \""public_v6\"": \""\"", \""private_v4\"": \""172.16.100.97\"", \""interface_ip\"": \""10.8.183.233\"", \""name\"": \""helloinstance\"", \""created\"": \""2016-08-10T18:32:16Z\"", \""tenant_id\"": \""f1dda47890754241a3e111f9b7394707\"", \""region\"": \""\"", \""adminPass\"": \""tX4SZ3gV85Sb\"", \""os-extended-volumes:volumes_attached\"": [], \""volumes\"": [], \""config_drive\"": \""\"", \""human_id\"": \""helloinstance\""}}\n{\""msg\"": \""Traceback (most recent call last):\\n File \\\""/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper\\\"", line 89, in _run_module\\n File 
\\\""/usr/lib64/python2.7/json/__init__.py\\\"", line 339, in loads\\n return _default_decoder.decode(s)\\n File \\\""/usr/lib64/python2.7/json/decoder.py\\\"", line 364, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\n File \\\""/usr/lib64/python2.7/json/decoder.py\\\"", line 382, in raw_decode\\n raise ValueError(\\\""No JSON object could be decoded\\\"")\\nValueError: No JSON object could be decoded\\n\"", \""failed\"": 1, \""cmd\"": \""/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server\"", \""data\"": \""No handlers could be found for logger \\\""keystoneauth.identity.base\\\""\\n\\n{\\\""invocation\\\"": {\\\""module_args\\\"": {\\\""auth_type\\\"": null, \\\""availability_zone\\\"": null, \\\""image\\\"": \\\""rhel-6.5_jeos\\\"", \\\""image_exclude\\\"": \\\""(deprecated)\\\"", \\\""flavor_include\\\"": null, \\\""meta\\\"": null, \\\""flavor\\\"": \\\""m1.small\\\"", \\\""cloud\\\"": null, \\\""scheduler_hints\\\"": null, \\\""boot_from_volume\\\"": false, \\\""userdata\\\"": null, \\\""network\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""nics\\\"": [], \\\""floating_ips\\\"": null, \\\""flavor_ram\\\"": null, \\\""volume_size\\\"": false, \\\""state\\\"": \\\""present\\\"", \\\""auto_ip\\\"": true, \\\""security_groups\\\"": [\\\""default\\\""], \\\""config_drive\\\"": false, \\\""volumes\\\"": [], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""api_timeout\\\"": 99999, \\\""auth\\\"": {\\\""username\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""project_name\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""password\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""auth_url\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\""}, \\\""endpoint_type\\\"": \\\""public\\\"", \\\""boot_volume\\\"": null, \\\""key\\\"": null, \\\""cacert\\\"": null, \\\""wait\\\"": true, \\\""name\\\"": \\\""helloinstance\\\"", \\\""region_name\\\"": null, \\\""timeout\\\"": 180, \\\""cert\\\"": null, \\\""terminate_volume\\\"": false, \\\""verify\\\"": true, \\\""floating_ip_pools\\\"": null}}, \\\""openstack\\\"": {\\\""OS-EXT-STS:task_state\\\"": null, \\\""addresses\\\"": {\\\""e2e-openstack\\\"": [{\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""172.16.100.97\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""fixed\\\""}, {\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""10.8.183.233\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""floating\\\""}]}, \\\""image\\\"": {\\\""id\\\"": \\\""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\"", \\\""name\\\"": \\\""rhel-6.5_jeos\\\""}, \\\""OS-EXT-STS:vm_state\\\"": \\\""active\\\"", \\\""OS-SRV-USG:launched_at\\\"": \\\""2016-08-10T18:32:23.000000\\\"", \\\""NAME_ATTR\\\"": \\\""name\\\"", \\\""flavor\\\"": {\\\""id\\\"": \\\""2\\\"", \\\""name\\\"": \\\""m1.small\\\""}, \\\""az\\\"": \\\""nova\\\"", \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""cloud\\\"": \\\""defaults\\\"", \\\""user_id\\\"": \\\""9c770dbddda444799e627004fee26e0a\\\"", \\\""OS-DCF:diskConfig\\\"": \\\""MANUAL\\\"", \\\""networks\\\"": {\\\""e2e-openstack\\\"": [\\\""172.16.100.97\\\"", \\\""10.8.183.233\\\""]}, \\\""accessIPv4\\\"": \\\""10.8.183.233\\\"", \\\""accessIPv6\\\"": \\\""\\\"", \\\""security_groups\\\"": [{\\\""id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""name\\\"": \\\""default\\\"", \\\""security_group_rules\\\"": [{\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": 
null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""ade9fcb9-14c1-4975-a04d-6007f80005c1\\\""}, {\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""d03e4bae-24b6-415a-a30c-ee0d060f566f\\\""}], \\\""description\\\"": \\\""Default security group\\\""}], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""progress\\\"": 0, \\\""OS-EXT-STS:power_state\\\"": 1, \\\""OS-EXT-AZ:availability_zone\\\"": \\\""nova\\\"", \\\""metadata\\\"": {}, \\\""status\\\"": \\\""ACTIVE\\\"", \\\""updated\\\"": \\\""2016-08-10T18:32:23Z\\\"", \\\""hostId\\\"": \\\""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\"", \\\""HUMAN_ID\\\"": true, \\\""OS-SRV-USG:terminated_at\\\"": null, \\\""public_v4\\\"": \\\""10.8.183.233\\\"", \\\""public_v6\\\"": \\\""\\\"", \\\""private_v4\\\"": \\\""172.16.100.97\\\"", \\\""interface_ip\\\"": \\\""10.8.183.233\\\"", \\\""name\\\"": \\\""helloinstance\\\"", \\\""created\\\"": \\\""2016-08-10T18:32:16Z\\\"", \\\""tenant_id\\\"": \\\""f1dda47890754241a3e111f9b7394707\\\"", \\\""region\\\"": \\\""\\\"", \\\""adminPass\\\"": \\\""tX4SZ3gV85Sb\\\"", \\\""os-extended-volumes:volumes_attached\\\"": [], \\\""volumes\\\"": [], \\\""config_drive\\\"": \\\""\\\"", \\\""human_id\\\"": \\\""helloinstance\\\""}, \\\""changed\\\"": true, \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""server\\\"": {\\\""OS-EXT-STS:task_state\\\"": null, \\\""addresses\\\"": {\\\""e2e-openstack\\\"": [{\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""172.16.100.97\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""fixed\\\""}, {\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""10.8.183.233\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""floating\\\""}]}, \\\""image\\\"": {\\\""id\\\"": \\\""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\"", \\\""name\\\"": \\\""rhel-6.5_jeos\\\""}, \\\""OS-EXT-STS:vm_state\\\"": \\\""active\\\"", \\\""OS-SRV-USG:launched_at\\\"": \\\""2016-08-10T18:32:23.000000\\\"", \\\""NAME_ATTR\\\"": \\\""name\\\"", \\\""flavor\\\"": {\\\""id\\\"": \\\""2\\\"", \\\""name\\\"": \\\""m1.small\\\""}, \\\""az\\\"": \\\""nova\\\"", \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""cloud\\\"": \\\""defaults\\\"", \\\""user_id\\\"": \\\""9c770dbddda444799e627004fee26e0a\\\"", \\\""OS-DCF:diskConfig\\\"": \\\""MANUAL\\\"", \\\""networks\\\"": {\\\""e2e-openstack\\\"": [\\\""172.16.100.97\\\"", \\\""10.8.183.233\\\""]}, \\\""accessIPv4\\\"": \\\""10.8.183.233\\\"", \\\""accessIPv6\\\"": \\\""\\\"", \\\""security_groups\\\"": [{\\\""id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""name\\\"": \\\""default\\\"", \\\""security_group_rules\\\"": [{\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""ade9fcb9-14c1-4975-a04d-6007f80005c1\\\""}, {\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": 
null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""d03e4bae-24b6-415a-a30c-ee0d060f566f\\\""}], \\\""description\\\"": \\\""Default security group\\\""}], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""progress\\\"": 0, \\\""OS-EXT-STS:power_state\\\"": 1, \\\""OS-EXT-AZ:availability_zone\\\"": \\\""nova\\\"", \\\""metadata\\\"": {}, \\\""status\\\"": \\\""ACTIVE\\\"", \\\""updated\\\"": \\\""2016-08-10T18:32:23Z\\\"", \\\""hostId\\\"": \\\""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\"", \\\""HUMAN_ID\\\"": true, \\\""OS-SRV-USG:terminated_at\\\"": null, \\\""public_v4\\\"": \\\""10.8.183.233\\\"", \\\""public_v6\\\"": \\\""\\\"", \\\""private_v4\\\"": \\\""172.16.100.97\\\"", \\\""interface_ip\\\"": \\\""10.8.183.233\\\"", \\\""name\\\"": \\\""helloinstance\\\"", \\\""created\\\"": \\\""2016-08-10T18:32:16Z\\\"", \\\""tenant_id\\\"": \\\""f1dda47890754241a3e111f9b7394707\\\"", \\\""region\\\"": \\\""\\\"", \\\""adminPass\\\"": \\\""tX4SZ3gV85Sb\\\"", \\\""os-extended-volumes:volumes_attached\\\"": [], \\\""volumes\\\"": [], \\\""config_drive\\\"": \\\""\\\"", \\\""human_id\\\"": \\\""helloinstance\\\""}}\\n\"", \""ansible_job_id\"": \""501588893815.445\""}"", ""results_file"": ""/root/.ansible_async/501588893815.445"", ""started"": 1} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @pluck_os.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ```",True,"ansible os_server module doesnot work with async_status - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module : os_server and async_status ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Tried using os_server module in using async , its failing to handle , output of the job . ##### STEPS TO REPRODUCE Create a playbook to boot a single instance on openstack async ``` --- - name: test the os_server module on async hosts: localhost connection: local gather_facts: false tasks: - name: ""provision os_server resources"" os_server: state: ""present"" auth: auth_url: ""http://localhost:5000/v2.0/"" username: ""openstackusername"" password: ""openstackpassword"" project_name: ""openstackprojectname"" name: ""helloinstance"" image: ""rhel-6.5_jeos"" key_name: ""test_keypair"" api_timeout: 99999 flavor: ""m1.small"" network: ""testnetwork"" async: 1000 poll: 0 register: yum_sleeper - name: 'check on fire and forget task' async_status: jid: ""{{ yum_sleeper.ansible_job_id }}"" register: job_result until: job_result.finished retries: 30 ``` ##### EXPECTED RESULTS Expected , job output with openstack server details . 
##### ACTUAL RESULTS https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e command: ``` ansible-playbook -vvvvvv pluck_os.yml ``` Actual output: https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e ``` Using /etc/ansible/ansible.cfg as config file [WARNING]: provided hosts list is empty, only localhost is available Loaded callback default of type stdout, v2.0 PLAYBOOK: pluck_os.yml ********************************************************* 1 plays in pluck_os.yml PLAY [test the os_server module on async] ************************************** TASK [provision/deprovision os_server resources by looping on count] *********** task path: /root/linch-pin/pluck_os.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `"" && echo ansible-tmp-1470853932.03-182394085106578=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpF3aQ4d TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server <127.0.0.1> PUT /tmp/tmpQqJAgL TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper <127.0.0.1> EXEC /bin/sh -c 'chmod -R u+x /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper 501588893815 1000 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""ansible_job_id"": ""501588893815.445"", ""changed"": false, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""started"": 1} TASK [check on fire and forget task] ******************************************* task path: /root/linch-pin/pluck_os.yml:24 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `"" && echo ansible-tmp-1470853933.23-277604438370373=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpIkSVL9 TO /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (29 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 1, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `"" && echo ansible-tmp-1470853938.32-242588126297954=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmp4jd2U7 TO /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status <127.0.0.1> EXEC /bin/sh -c 
'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (28 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 2, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `"" && echo ansible-tmp-1470853943.42-153837274674906=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpchWAZ5 TO /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (27 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 3, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `"" && echo ansible-tmp-1470853948.51-150710275020038=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpTFyDJs TO /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/"" > /dev/null 2>&1 && sleep 0' FAILED - RETRYING: TASK: check on fire and forget task (26 retries left).Result was: {""ansible_job_id"": ""501588893815.445"", ""attempts"": 4, ""changed"": false, ""finished"": 0, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""results_file"": ""/root/.ansible_async/501588893815.445"", ""retries"": 30, ""started"": 1} <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `"" && echo ansible-tmp-1470853953.6-191645118737367=""` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpFrI9Bt TO /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> {""ansible_job_id"": ""501588893815.445"", ""changed"": false, ""failed"": true, ""finished"": 1, ""invocation"": {""module_args"": {""jid"": ""501588893815.445"", ""mode"": ""status""}, ""module_name"": ""async_status""}, ""msg"": ""Could not parse job output: No handlers could be found for logger \""keystoneauth.identity.base\""\n\n{\""invocation\"": {\""module_args\"": {\""auth_type\"": null, \""availability_zone\"": null, \""image\"": \""rhel-6.5_jeos\"", \""image_exclude\"": \""(deprecated)\"", \""flavor_include\"": null, \""meta\"": null, \""flavor\"": \""m1.small\"", \""cloud\"": null, \""scheduler_hints\"": null, \""boot_from_volume\"": false, \""userdata\"": null, \""network\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""nics\"": [], \""floating_ips\"": null, \""flavor_ram\"": null, \""volume_size\"": false, \""state\"": \""present\"", \""auto_ip\"": true, \""security_groups\"": [\""default\""], \""config_drive\"": false, \""volumes\"": [], \""key_name\"": \""ci-factory\"", \""api_timeout\"": 99999, \""auth\"": {\""username\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""project_name\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""password\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"", \""auth_url\"": \""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\""}, \""endpoint_type\"": \""public\"", \""boot_volume\"": null, \""key\"": null, \""cacert\"": null, \""wait\"": true, \""name\"": \""helloinstance\"", \""region_name\"": null, \""timeout\"": 180, \""cert\"": null, \""terminate_volume\"": false, \""verify\"": true, \""floating_ip_pools\"": null}}, \""openstack\"": {\""OS-EXT-STS:task_state\"": null, \""addresses\"": {\""e2e-openstack\"": [{\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""172.16.100.97\"", \""OS-EXT-IPS:type\"": \""fixed\""}, {\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""10.8.183.233\"", \""OS-EXT-IPS:type\"": \""floating\""}]}, \""image\"": {\""id\"": \""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\"", \""name\"": \""rhel-6.5_jeos\""}, \""OS-EXT-STS:vm_state\"": \""active\"", \""OS-SRV-USG:launched_at\"": \""2016-08-10T18:32:23.000000\"", \""NAME_ATTR\"": \""name\"", \""flavor\"": {\""id\"": \""2\"", \""name\"": \""m1.small\""}, \""az\"": \""nova\"", \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""cloud\"": \""defaults\"", \""user_id\"": \""9c770dbddda444799e627004fee26e0a\"", \""OS-DCF:diskConfig\"": \""MANUAL\"", \""networks\"": {\""e2e-openstack\"": [\""172.16.100.97\"", \""10.8.183.233\""]}, \""accessIPv4\"": \""10.8.183.233\"", \""accessIPv6\"": \""\"", \""security_groups\"": [{\""id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""name\"": \""default\"", \""security_group_rules\"": [{\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""ade9fcb9-14c1-4975-a04d-6007f80005c1\""}, {\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""d03e4bae-24b6-415a-a30c-ee0d060f566f\""}], \""description\"": \""Default security group\""}], \""key_name\"": \""ci-factory\"", \""progress\"": 0, \""OS-EXT-STS:power_state\"": 1, \""OS-EXT-AZ:availability_zone\"": \""nova\"", \""metadata\"": {}, \""status\"": 
\""ACTIVE\"", \""updated\"": \""2016-08-10T18:32:23Z\"", \""hostId\"": \""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\"", \""HUMAN_ID\"": true, \""OS-SRV-USG:terminated_at\"": null, \""public_v4\"": \""10.8.183.233\"", \""public_v6\"": \""\"", \""private_v4\"": \""172.16.100.97\"", \""interface_ip\"": \""10.8.183.233\"", \""name\"": \""helloinstance\"", \""created\"": \""2016-08-10T18:32:16Z\"", \""tenant_id\"": \""f1dda47890754241a3e111f9b7394707\"", \""region\"": \""\"", \""adminPass\"": \""tX4SZ3gV85Sb\"", \""os-extended-volumes:volumes_attached\"": [], \""volumes\"": [], \""config_drive\"": \""\"", \""human_id\"": \""helloinstance\""}, \""changed\"": true, \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""server\"": {\""OS-EXT-STS:task_state\"": null, \""addresses\"": {\""e2e-openstack\"": [{\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""172.16.100.97\"", \""OS-EXT-IPS:type\"": \""fixed\""}, {\""OS-EXT-IPS-MAC:mac_addr\"": \""fa:16:3e:11:38:38\"", \""version\"": 4, \""addr\"": \""10.8.183.233\"", \""OS-EXT-IPS:type\"": \""floating\""}]}, \""image\"": {\""id\"": \""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\"", \""name\"": \""rhel-6.5_jeos\""}, \""OS-EXT-STS:vm_state\"": \""active\"", \""OS-SRV-USG:launched_at\"": \""2016-08-10T18:32:23.000000\"", \""NAME_ATTR\"": \""name\"", \""flavor\"": {\""id\"": \""2\"", \""name\"": \""m1.small\""}, \""az\"": \""nova\"", \""id\"": \""b66ba857-5740-4c13-9e70-191883398af8\"", \""cloud\"": \""defaults\"", \""user_id\"": \""9c770dbddda444799e627004fee26e0a\"", \""OS-DCF:diskConfig\"": \""MANUAL\"", \""networks\"": {\""e2e-openstack\"": [\""172.16.100.97\"", \""10.8.183.233\""]}, \""accessIPv4\"": \""10.8.183.233\"", \""accessIPv6\"": \""\"", \""security_groups\"": [{\""id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""name\"": \""default\"", \""security_group_rules\"": [{\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""ade9fcb9-14c1-4975-a04d-6007f80005c1\""}, {\""direction\"": \""ingress\"", \""protocol\"": null, \""remote_ip_prefix\"": null, \""port_range_max\"": null, \""security_group_id\"": \""df1a797b-009c-4685-a7c9-43863c36d653\"", \""port_range_min\"": null, \""ethertype\"": \""IPv4\"", \""id\"": \""d03e4bae-24b6-415a-a30c-ee0d060f566f\""}], \""description\"": \""Default security group\""}], \""key_name\"": \""ci-factory\"", \""progress\"": 0, \""OS-EXT-STS:power_state\"": 1, \""OS-EXT-AZ:availability_zone\"": \""nova\"", \""metadata\"": {}, \""status\"": \""ACTIVE\"", \""updated\"": \""2016-08-10T18:32:23Z\"", \""hostId\"": \""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\"", \""HUMAN_ID\"": true, \""OS-SRV-USG:terminated_at\"": null, \""public_v4\"": \""10.8.183.233\"", \""public_v6\"": \""\"", \""private_v4\"": \""172.16.100.97\"", \""interface_ip\"": \""10.8.183.233\"", \""name\"": \""helloinstance\"", \""created\"": \""2016-08-10T18:32:16Z\"", \""tenant_id\"": \""f1dda47890754241a3e111f9b7394707\"", \""region\"": \""\"", \""adminPass\"": \""tX4SZ3gV85Sb\"", \""os-extended-volumes:volumes_attached\"": [], \""volumes\"": [], \""config_drive\"": \""\"", \""human_id\"": \""helloinstance\""}}\n{\""msg\"": \""Traceback (most recent call last):\\n File \\\""/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper\\\"", line 89, in _run_module\\n File 
\\\""/usr/lib64/python2.7/json/__init__.py\\\"", line 339, in loads\\n return _default_decoder.decode(s)\\n File \\\""/usr/lib64/python2.7/json/decoder.py\\\"", line 364, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\n File \\\""/usr/lib64/python2.7/json/decoder.py\\\"", line 382, in raw_decode\\n raise ValueError(\\\""No JSON object could be decoded\\\"")\\nValueError: No JSON object could be decoded\\n\"", \""failed\"": 1, \""cmd\"": \""/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server\"", \""data\"": \""No handlers could be found for logger \\\""keystoneauth.identity.base\\\""\\n\\n{\\\""invocation\\\"": {\\\""module_args\\\"": {\\\""auth_type\\\"": null, \\\""availability_zone\\\"": null, \\\""image\\\"": \\\""rhel-6.5_jeos\\\"", \\\""image_exclude\\\"": \\\""(deprecated)\\\"", \\\""flavor_include\\\"": null, \\\""meta\\\"": null, \\\""flavor\\\"": \\\""m1.small\\\"", \\\""cloud\\\"": null, \\\""scheduler_hints\\\"": null, \\\""boot_from_volume\\\"": false, \\\""userdata\\\"": null, \\\""network\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""nics\\\"": [], \\\""floating_ips\\\"": null, \\\""flavor_ram\\\"": null, \\\""volume_size\\\"": false, \\\""state\\\"": \\\""present\\\"", \\\""auto_ip\\\"": true, \\\""security_groups\\\"": [\\\""default\\\""], \\\""config_drive\\\"": false, \\\""volumes\\\"": [], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""api_timeout\\\"": 99999, \\\""auth\\\"": {\\\""username\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""project_name\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""password\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"", \\\""auth_url\\\"": \\\""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\""}, \\\""endpoint_type\\\"": \\\""public\\\"", \\\""boot_volume\\\"": null, \\\""key\\\"": null, \\\""cacert\\\"": null, \\\""wait\\\"": true, \\\""name\\\"": \\\""helloinstance\\\"", \\\""region_name\\\"": null, \\\""timeout\\\"": 180, \\\""cert\\\"": null, \\\""terminate_volume\\\"": false, \\\""verify\\\"": true, \\\""floating_ip_pools\\\"": null}}, \\\""openstack\\\"": {\\\""OS-EXT-STS:task_state\\\"": null, \\\""addresses\\\"": {\\\""e2e-openstack\\\"": [{\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""172.16.100.97\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""fixed\\\""}, {\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""10.8.183.233\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""floating\\\""}]}, \\\""image\\\"": {\\\""id\\\"": \\\""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\"", \\\""name\\\"": \\\""rhel-6.5_jeos\\\""}, \\\""OS-EXT-STS:vm_state\\\"": \\\""active\\\"", \\\""OS-SRV-USG:launched_at\\\"": \\\""2016-08-10T18:32:23.000000\\\"", \\\""NAME_ATTR\\\"": \\\""name\\\"", \\\""flavor\\\"": {\\\""id\\\"": \\\""2\\\"", \\\""name\\\"": \\\""m1.small\\\""}, \\\""az\\\"": \\\""nova\\\"", \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""cloud\\\"": \\\""defaults\\\"", \\\""user_id\\\"": \\\""9c770dbddda444799e627004fee26e0a\\\"", \\\""OS-DCF:diskConfig\\\"": \\\""MANUAL\\\"", \\\""networks\\\"": {\\\""e2e-openstack\\\"": [\\\""172.16.100.97\\\"", \\\""10.8.183.233\\\""]}, \\\""accessIPv4\\\"": \\\""10.8.183.233\\\"", \\\""accessIPv6\\\"": \\\""\\\"", \\\""security_groups\\\"": [{\\\""id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""name\\\"": \\\""default\\\"", \\\""security_group_rules\\\"": [{\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": 
null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""ade9fcb9-14c1-4975-a04d-6007f80005c1\\\""}, {\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""d03e4bae-24b6-415a-a30c-ee0d060f566f\\\""}], \\\""description\\\"": \\\""Default security group\\\""}], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""progress\\\"": 0, \\\""OS-EXT-STS:power_state\\\"": 1, \\\""OS-EXT-AZ:availability_zone\\\"": \\\""nova\\\"", \\\""metadata\\\"": {}, \\\""status\\\"": \\\""ACTIVE\\\"", \\\""updated\\\"": \\\""2016-08-10T18:32:23Z\\\"", \\\""hostId\\\"": \\\""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\"", \\\""HUMAN_ID\\\"": true, \\\""OS-SRV-USG:terminated_at\\\"": null, \\\""public_v4\\\"": \\\""10.8.183.233\\\"", \\\""public_v6\\\"": \\\""\\\"", \\\""private_v4\\\"": \\\""172.16.100.97\\\"", \\\""interface_ip\\\"": \\\""10.8.183.233\\\"", \\\""name\\\"": \\\""helloinstance\\\"", \\\""created\\\"": \\\""2016-08-10T18:32:16Z\\\"", \\\""tenant_id\\\"": \\\""f1dda47890754241a3e111f9b7394707\\\"", \\\""region\\\"": \\\""\\\"", \\\""adminPass\\\"": \\\""tX4SZ3gV85Sb\\\"", \\\""os-extended-volumes:volumes_attached\\\"": [], \\\""volumes\\\"": [], \\\""config_drive\\\"": \\\""\\\"", \\\""human_id\\\"": \\\""helloinstance\\\""}, \\\""changed\\\"": true, \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""server\\\"": {\\\""OS-EXT-STS:task_state\\\"": null, \\\""addresses\\\"": {\\\""e2e-openstack\\\"": [{\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""172.16.100.97\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""fixed\\\""}, {\\\""OS-EXT-IPS-MAC:mac_addr\\\"": \\\""fa:16:3e:11:38:38\\\"", \\\""version\\\"": 4, \\\""addr\\\"": \\\""10.8.183.233\\\"", \\\""OS-EXT-IPS:type\\\"": \\\""floating\\\""}]}, \\\""image\\\"": {\\\""id\\\"": \\\""3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\"", \\\""name\\\"": \\\""rhel-6.5_jeos\\\""}, \\\""OS-EXT-STS:vm_state\\\"": \\\""active\\\"", \\\""OS-SRV-USG:launched_at\\\"": \\\""2016-08-10T18:32:23.000000\\\"", \\\""NAME_ATTR\\\"": \\\""name\\\"", \\\""flavor\\\"": {\\\""id\\\"": \\\""2\\\"", \\\""name\\\"": \\\""m1.small\\\""}, \\\""az\\\"": \\\""nova\\\"", \\\""id\\\"": \\\""b66ba857-5740-4c13-9e70-191883398af8\\\"", \\\""cloud\\\"": \\\""defaults\\\"", \\\""user_id\\\"": \\\""9c770dbddda444799e627004fee26e0a\\\"", \\\""OS-DCF:diskConfig\\\"": \\\""MANUAL\\\"", \\\""networks\\\"": {\\\""e2e-openstack\\\"": [\\\""172.16.100.97\\\"", \\\""10.8.183.233\\\""]}, \\\""accessIPv4\\\"": \\\""10.8.183.233\\\"", \\\""accessIPv6\\\"": \\\""\\\"", \\\""security_groups\\\"": [{\\\""id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""name\\\"": \\\""default\\\"", \\\""security_group_rules\\\"": [{\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""ade9fcb9-14c1-4975-a04d-6007f80005c1\\\""}, {\\\""direction\\\"": \\\""ingress\\\"", \\\""protocol\\\"": 
null, \\\""remote_ip_prefix\\\"": null, \\\""port_range_max\\\"": null, \\\""security_group_id\\\"": \\\""df1a797b-009c-4685-a7c9-43863c36d653\\\"", \\\""port_range_min\\\"": null, \\\""ethertype\\\"": \\\""IPv4\\\"", \\\""id\\\"": \\\""d03e4bae-24b6-415a-a30c-ee0d060f566f\\\""}], \\\""description\\\"": \\\""Default security group\\\""}], \\\""key_name\\\"": \\\""ci-factory\\\"", \\\""progress\\\"": 0, \\\""OS-EXT-STS:power_state\\\"": 1, \\\""OS-EXT-AZ:availability_zone\\\"": \\\""nova\\\"", \\\""metadata\\\"": {}, \\\""status\\\"": \\\""ACTIVE\\\"", \\\""updated\\\"": \\\""2016-08-10T18:32:23Z\\\"", \\\""hostId\\\"": \\\""be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\"", \\\""HUMAN_ID\\\"": true, \\\""OS-SRV-USG:terminated_at\\\"": null, \\\""public_v4\\\"": \\\""10.8.183.233\\\"", \\\""public_v6\\\"": \\\""\\\"", \\\""private_v4\\\"": \\\""172.16.100.97\\\"", \\\""interface_ip\\\"": \\\""10.8.183.233\\\"", \\\""name\\\"": \\\""helloinstance\\\"", \\\""created\\\"": \\\""2016-08-10T18:32:16Z\\\"", \\\""tenant_id\\\"": \\\""f1dda47890754241a3e111f9b7394707\\\"", \\\""region\\\"": \\\""\\\"", \\\""adminPass\\\"": \\\""tX4SZ3gV85Sb\\\"", \\\""os-extended-volumes:volumes_attached\\\"": [], \\\""volumes\\\"": [], \\\""config_drive\\\"": \\\""\\\"", \\\""human_id\\\"": \\\""helloinstance\\\""}}\\n\"", \""ansible_job_id\"": \""501588893815.445\""}"", ""results_file"": ""/root/.ansible_async/501588893815.445"", ""started"": 1} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @pluck_os.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ```",1,ansible os server module doesnot work with async status issue type bug report component name module os server and async status ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary tried using os server module in using async its failing to handle output of the job steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a playbook to boot a single instance on openstack async name test the os server module on async hosts localhost connection local gather facts false tasks name provision os server resources os server state present auth auth url username openstackusername password openstackpassword project name openstackprojectname name helloinstance image rhel jeos key name test keypair api timeout flavor small network testnetwork async poll register yum sleeper name check on fire and forget task async status jid yum sleeper ansible job id register job result until job result finished retries expected results expected job output with openstack server details actual results command ansible playbook vvvvvv pluck os yml actual output using etc ansible ansible cfg as config file provided hosts list is empty only localhost is available loaded callback default of type stdout playbook pluck os yml plays in pluck os yml play task task path root linch pin pluck os yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp 
sleep put tmp to root ansible tmp ansible tmp os server put tmp tmpqqjagl to root ansible tmp ansible tmp async wrapper exec bin sh c chmod r u x root ansible tmp ansible tmp sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf root ansible tmp ansible tmp async wrapper root ansible tmp ansible tmp os server sleep exec bin sh c rm f r root ansible tmp ansible tmp dev null sleep ok ansible job id changed false results file root ansible async started task task path root linch pin pluck os yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp async status exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp async status rm rf root ansible tmp ansible tmp dev null sleep failed retrying task check on fire and forget task retries left result was ansible job id attempts changed false finished invocation module args jid mode status module name async status results file root ansible async retries started exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp async status exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp async status rm rf root ansible tmp ansible tmp dev null sleep failed retrying task check on fire and forget task retries left result was ansible job id attempts changed false finished invocation module args jid mode status module name async status results file root ansible async retries started exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp async status exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp async status rm rf root ansible tmp ansible tmp dev null sleep failed retrying task check on fire and forget task retries left result was ansible job id attempts changed false finished invocation module args jid mode status module name async status results file root ansible async retries started exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmptfydjs to root ansible tmp ansible tmp async status exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp async status rm rf root ansible tmp ansible tmp dev null sleep failed retrying task check on fire and forget task retries left result was ansible job id attempts changed false finished invocation module args jid mode status module name async status results file root ansible async retries started exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp async status exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp async status rm rf root ansible tmp ansible tmp dev null sleep fatal failed ansible job id changed false failed true finished invocation module args jid mode status module name async status msg could not parse job output no handlers could be found for logger keystoneauth identity base n n invocation module args auth type null availability zone null image rhel jeos image 
exclude deprecated flavor include null meta null flavor small cloud null scheduler hints null boot from volume false userdata null network value specified in no log parameter nics floating ips null flavor ram null volume size false state present auto ip true security groups config drive false volumes key name ci factory api timeout auth username value specified in no log parameter project name value specified in no log parameter password value specified in no log parameter auth url value specified in no log parameter endpoint type public boot volume null key null cacert null wait true name helloinstance region name null timeout cert null terminate volume false verify true floating ip pools null openstack os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance changed true id server os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance n msg traceback most recent call last n file root ansible tmp ansible tmp async wrapper line in run module n file usr json init py line in loads n return default decoder decode s n file usr json decoder py line in decode n obj end self raw decode s idx w s end n file usr json decoder py line in raw decode n raise valueerror no json object could be decoded nvalueerror no json object could be decoded n failed cmd root ansible tmp ansible tmp os server data no handlers could be found for logger keystoneauth identity base n n invocation module args auth type null availability zone null image rhel jeos image exclude deprecated flavor include null meta null flavor small cloud null scheduler hints null boot from volume false userdata null network value specified in no log parameter nics floating ips null flavor ram null volume size false state present auto ip true security groups config drive false volumes key name ci factory api timeout auth username value specified in no log parameter project name value specified in no log parameter password value specified in no log parameter auth url value specified in no log parameter endpoint type public boot volume null key null cacert null wait true name helloinstance region name null timeout cert null terminate volume false verify true floating ip pools null openstack os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security 
groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance changed true id server os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance n ansible job id results file root ansible async started no more hosts left to retry use limit pluck os retry play recap localhost ok changed unreachable failed ,1 1775,6575800590.0,IssuesEvent,2017-09-11 17:22:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Wrong requirements for docker_container module,affects_2.1 cloud docker docs_report waiting_on_maintainer," ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION On remote host (akl0mw643) docker-py 1.7.2 is installed. ##### OS / ENVIRONMENT Ansible side: RHEL 7.2 Remote host: RHEL 7.2 with docker-py-1.7.2 library ##### SUMMARY In case of any limits update (cpu__, cpuset__) , a docker_container module calls docker-py API function with name ""container_update"". This function was introduced in docker-py-1.8.0 for the first time. So requirements should be ""docker-py >= 1.8.0"". ##### STEPS TO REPRODUCE ``` docker_container: name: docker-py-test image: test cpu_quota: ""1000"" hostname: dockertest Or one of others limits: cpuset_cpus: ""0,1"" cpuset_mems: ""0,1"" cpu_shares: 100 cpu_period: 1000 ``` ##### EXPECTED RESULTS TASK [Ensure that X containers are started] ********************************* changed: [akl0mw643] PLAY RECAP ********************************************************************* akl0mw643 : ok=2 changed=1 unreachable=0 failed=0 ##### ACTUAL RESULTS ``` fatal: [akl0mw643]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker_container""}, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1928, in \r\n main()\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1921, in main\r\n cm = ContainerManager(client)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1583, in __init__\r\n self.present(state)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1626, in present\r\n container = self.update_limits(container)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1687, in update_limits\r\n self.container_update(container.Id, self.parameters.update_parameters)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1804, in container_update\r\n if not self.check_mode and callable(getattr(self.client, 'update_container')):\r\nAttributeError: 'AnsibleDockerClient' object has no attribute 'update_container'\r\n"", ""msg"": ""MODULE FAILURE""} ``` ",True,"Wrong requirements for docker_container module - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION On remote host (akl0mw643) docker-py 1.7.2 is installed. ##### OS / ENVIRONMENT Ansible side: RHEL 7.2 Remote host: RHEL 7.2 with docker-py-1.7.2 library ##### SUMMARY In case of any limits update (cpu__, cpuset__) , a docker_container module calls docker-py API function with name ""container_update"". This function was introduced in docker-py-1.8.0 for the first time. So requirements should be ""docker-py >= 1.8.0"". ##### STEPS TO REPRODUCE ``` docker_container: name: docker-py-test image: test cpu_quota: ""1000"" hostname: dockertest Or one of others limits: cpuset_cpus: ""0,1"" cpuset_mems: ""0,1"" cpu_shares: 100 cpu_period: 1000 ``` ##### EXPECTED RESULTS TASK [Ensure that X containers are started] ********************************* changed: [akl0mw643] PLAY RECAP ********************************************************************* akl0mw643 : ok=2 changed=1 unreachable=0 failed=0 ##### ACTUAL RESULTS ``` fatal: [akl0mw643]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker_container""}, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1928, in \r\n main()\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1921, in main\r\n cm = ContainerManager(client)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1583, in __init__\r\n self.present(state)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1626, in present\r\n container = self.update_limits(container)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1687, in update_limits\r\n self.container_update(container.Id, self.parameters.update_parameters)\r\n File \""/tmp/ansible_i94B2c/ansible_module_docker_container.py\"", line 1804, in container_update\r\n if not self.check_mode and callable(getattr(self.client, 'update_container')):\r\nAttributeError: 'AnsibleDockerClient' object has no attribute 'update_container'\r\n"", ""msg"": ""MODULE FAILURE""} ``` ",1,wrong requirements for docker container module issue type documentation report component name docker container ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables on remote host docker py is installed os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible side rhel remote host rhel with docker py library summary in case of any limits update cpu cpuset a docker container module calls docker py api function with name container update this function was introduced in docker py for the first time so requirements should be docker py steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used docker container name docker py test image test cpu quota hostname dockertest or one of others limits cpuset cpus cpuset mems cpu shares cpu period expected results task changed play recap ok changed unreachable failed actual results fatal failed changed false failed true invocation module name docker container module stderr module stdout traceback most recent call last r n file tmp ansible ansible module docker container py line in r n main r n file tmp ansible ansible module docker container py line in main r n cm containermanager client r n file tmp ansible ansible module docker container py line in init r n self present state r n file tmp ansible ansible module docker container py line in present r n container self update limits container r n file tmp ansible ansible module docker container py line in update limits r n self container update container id self parameters update parameters r n file tmp ansible ansible module docker container py line in container update r n if not self check mode and callable getattr self client update container r nattributeerror ansibledockerclient object has no attribute update container r n msg module failure ,1 982,4746549521.0,IssuesEvent,2016-10-21 11:38:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Feature Idea : cloudformation ""describe"" state",affects_2.0 aws cloud feature_idea waiting_on_maintainer,"What do people thing about ""describe"" or ""available"" state for cloudformation module. 
I'm happy to add it in and send pull req I just want to make sure it will get accepted before I start it :) **Use case:** build stack in separate playbook/ansible run, and then I would like to use it like ```YML - name: Get MyStack tier cloudformation: aws_access_key: ""{{ aws.access_key }}"" aws_secret_key: ""{{ aws.secret_key }}"" stack_name: my-stack-name state: available register: mystack_data - name: use {{ mystack_data.outputs.* }} in some awesome ways ``` I really don't want to use lookup plugins, as it's a bit ugly and limited to specify aws secrets and keys per run. Also I need to run lookup for each stack output.",True,"Feature Idea : cloudformation ""describe"" state - What do people thing about ""describe"" or ""available"" state for cloudformation module. I'm happy to add it in and send pull req I just want to make sure it will get accepted before I start it :) **Use case:** build stack in separate playbook/ansible run, and then I would like to use it like ```YML - name: Get MyStack tier cloudformation: aws_access_key: ""{{ aws.access_key }}"" aws_secret_key: ""{{ aws.secret_key }}"" stack_name: my-stack-name state: available register: mystack_data - name: use {{ mystack_data.outputs.* }} in some awesome ways ``` I really don't want to use lookup plugins, as it's a bit ugly and limited to specify aws secrets and keys per run. Also I need to run lookup for each stack output.",1,feature idea cloudformation describe state what do people thing about describe or available state for cloudformation module i m happy to add it in and send pull req i just want to make sure it will get accepted before i start it use case build stack in separate playbook ansible run and then i would like to use it like yml name get mystack tier cloudformation aws access key aws access key aws secret key aws secret key stack name my stack name state available register mystack data name use mystack data outputs in some awesome ways i really don t want to use lookup plugins as it s a bit ugly and limited to specify aws secrets and keys per run also i need to run lookup for each stack output ,1 1897,6577544634.0,IssuesEvent,2017-09-12 01:39:31,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 wait parameter incorrectly claims to only support 'running' state,affects_2.3 aws cloud docs_report waiting_on_maintainer,"##### Issue Type: - Documentation Report ##### Plugin Name: ec2 ##### Ansible Version: N/A ##### Summary: Current documentation for wait parameter says: > wait for the instance to be 'running' before returning. Source: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2.py#L105 The actual code supports waiting for any state. https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2.py#L1308 The actual check is on line 1315 ``` python if i.state == state: ``` ##### Steps To Reproduce: Use documentation as reference when trying to stop an instance and wait for it to stop. ##### Expected Results: Documentation correctly states that it will wait for instance to be in running or stopped state. ##### Actual Results: Documentation states and implies that waiting for stopped state is not supported. Reproduced independently by 2 developers in my team, myself included. 
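The check quoted in the report above (`if i.state == state:`) is generic, so in playbook terms `wait: yes` should honour whatever `state` the task asks for, not just 'running'. As a reading aid, here is a minimal, hypothetical reconstruction of that wait loop's logic; the function name, structure and boto v2 calls are illustrative assumptions, and only the `i.state == state` comparison is taken from the report.

```python
# Hypothetical simplification of the wait behaviour described above.
# Only the `i.state == state` comparison comes from the report; the rest
# (function name, boto v2 calls, polling interval) is illustrative.
import time

def wait_for_state(ec2_conn, instance_ids, state, wait_timeout=300):
    """Poll until every instance reports the requested state, or time out."""
    deadline = time.time() + wait_timeout
    while time.time() < deadline:
        matched = set()
        for reservation in ec2_conn.get_all_instances(instance_ids):
            for i in reservation.instances:
                if i.state == state:   # the generic check cited at line 1315
                    matched.add(i.id)
        if len(matched) == len(instance_ids):
            return True                # reaches 'stopped' just as readily as 'running'
        time.sleep(5)
    return False
```

Under that reading, the documentation fix is simply to say the module waits for the requested state rather than for 'running' specifically.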
",True,"ec2 wait parameter incorrectly claims to only support 'running' state - ##### Issue Type: - Documentation Report ##### Plugin Name: ec2 ##### Ansible Version: N/A ##### Summary: Current documentation for wait parameter says: > wait for the instance to be 'running' before returning. Source: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2.py#L105 The actual code supports waiting for any state. https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2.py#L1308 The actual check is on line 1315 ``` python if i.state == state: ``` ##### Steps To Reproduce: Use documentation as reference when trying to stop an instance and wait for it to stop. ##### Expected Results: Documentation correctly states that it will wait for instance to be in running or stopped state. ##### Actual Results: Documentation states and implies that waiting for stopped state is not supported. Reproduced independently by 2 developers in my team, myself included. ",1, wait parameter incorrectly claims to only support running state issue type documentation report plugin name ansible version n a summary current documentation for wait parameter says wait for the instance to be running before returning source the actual code supports waiting for any state the actual check is on line python if i state state steps to reproduce use documentation as reference when trying to stop an instance and wait for it to stop expected results documentation correctly states that it will wait for instance to be in running or stopped state actual results documentation states and implies that waiting for stopped state is not supported reproduced independently by developers in my team myself included ,1 1735,6574863741.0,IssuesEvent,2017-09-11 14:19:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,shell.py has \r instead of \n ?,affects_2.3 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible/module_utils/shell.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel c064dce791) last updated 2016/10/19 10:54:36 (GMT -400) lib/ansible/modules/core: (detached HEAD b59b5d36e0) last updated 2016/10/19 10:54:36 (GMT -400) lib/ansible/modules/extras: (detached HEAD 3f77bb6857) last updated 2016/10/18 11:43:45 (GMT -400) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I noticed that shell.py has carriage-return (\r) in send strings instead of newline in some places, I'm not sure if there was a particular reason for this? After looking deeper, it does appear that it is allocating a PTY by default in paramiko, not sure if that makes this point somewhat mute. Normal convention is to ""send"" \n as far as I have experienced in various expect scripts over the years? I guess I'm just curious if there was an implementation detail why CR was used. Below is a diff that I implemented, it appears to work as expected in my test environments. I can submit PR if this fix is desired. 
``` --- a/lib/ansible/module_utils/shell.py +++ b/lib/ansible/module_utils/shell.py @@ -152,7 +152,7 @@ class Shell(object): responses = list() try: for command in to_list(commands): - cmd = '%s\r' % str(command) + cmd = '%s\n' % str(command) self.shell.sendall(cmd) responses.append(self.receive(command)) except socket.timeout: @@ -172,7 +172,7 @@ class Shell(object): for pr, ans in zip(prompt, response): match = pr.search(resp) if match: - answer = '%s\r' % ans + answer = '%s\n' % ans self.shell.sendall(answer) return True ``` ",True,"shell.py has \r instead of \n ? - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible/module_utils/shell.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel c064dce791) last updated 2016/10/19 10:54:36 (GMT -400) lib/ansible/modules/core: (detached HEAD b59b5d36e0) last updated 2016/10/19 10:54:36 (GMT -400) lib/ansible/modules/extras: (detached HEAD 3f77bb6857) last updated 2016/10/18 11:43:45 (GMT -400) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I noticed that shell.py has carriage-return (\r) in send strings instead of newline in some places, I'm not sure if there was a particular reason for this? After looking deeper, it does appear that it is allocating a PTY by default in paramiko, not sure if that makes this point somewhat mute. Normal convention is to ""send"" \n as far as I have experienced in various expect scripts over the years? I guess I'm just curious if there was an implementation detail why CR was used. Below is a diff that I implemented, it appears to work as expected in my test environments. I can submit PR if this fix is desired. ``` --- a/lib/ansible/module_utils/shell.py +++ b/lib/ansible/module_utils/shell.py @@ -152,7 +152,7 @@ class Shell(object): responses = list() try: for command in to_list(commands): - cmd = '%s\r' % str(command) + cmd = '%s\n' % str(command) self.shell.sendall(cmd) responses.append(self.receive(command)) except socket.timeout: @@ -172,7 +172,7 @@ class Shell(object): for pr, ans in zip(prompt, response): match = pr.search(resp) if match: - answer = '%s\r' % ans + answer = '%s\n' % ans self.shell.sendall(answer) return True ``` ",1,shell py has r instead of n issue type bug report component name ansible module utils shell py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary i noticed that shell py has carriage return r in send strings instead of newline in some places i m not sure if there was a particular reason for this after looking deeper it does appear that it is allocating a pty by default in paramiko not sure if that makes this point somewhat mute normal convention is to send n as far as i have experienced in various expect scripts over the years i guess i m just curious if there was an implementation detail why cr was used below is a diff that i implemented it appears to work as expected in my test environments i can submit pr if this fix is desired a lib ansible module utils shell py b lib ansible module utils shell py class shell object responses list try for command in to list commands cmd s r str command cmd s n str command self shell sendall cmd responses append self receive command 
except socket timeout class shell object for pr ans in zip prompt response match pr search resp if match answer s r ans answer s n ans self shell sendall answer return true ,1 1068,4889235855.0,IssuesEvent,2016-11-18 09:31:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,include_role does not support tags,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Linux, Red Hat Enterprise 7.2 ##### SUMMARY Originally filed this issue here because I was told on IRC this was part of ansible because it was a action plugin. The bot told me i was wrong after a few days, so now it's here. https://github.com/ansible/ansible/issues/17761 Forgive my terminology. I will do my best to describe that issue I am seeing. I am using a pre-release version of Ansible 2.2 for this. When using include_role module with tags, the include_role seems to ignore the tags and does not apply them when being called. I am using include_role in my playbook in the task list. I apply a tag on it. When I call ansible-playbook -t tagname it does not run my include_role task. ##### STEPS TO REPRODUCE ``` --- - name: test playbook hosts: localhost tasks: - debug: msg: ""hello from debug"" tags: - test - include_role: name: role1 tags: - test ``` OUTPUT ``` $ ansible-playbook -t test test.yml PLAY [test playbook] *********************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [debug] ******************************************************************* ok: [localhost] => { ""msg"": ""hello from debug"" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS I would expect it to run the role1, because it was called with the tags ##### ACTUAL RESULTS role1 was never called ",True,"include_role does not support tags - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Linux, Red Hat Enterprise 7.2 ##### SUMMARY Originally filed this issue here because I was told on IRC this was part of ansible because it was a action plugin. The bot told me i was wrong after a few days, so now it's here. https://github.com/ansible/ansible/issues/17761 Forgive my terminology. I will do my best to describe that issue I am seeing. I am using a pre-release version of Ansible 2.2 for this. When using include_role module with tags, the include_role seems to ignore the tags and does not apply them when being called. I am using include_role in my playbook in the task list. I apply a tag on it. When I call ansible-playbook -t tagname it does not run my include_role task. 
##### STEPS TO REPRODUCE ``` --- - name: test playbook hosts: localhost tasks: - debug: msg: ""hello from debug"" tags: - test - include_role: name: role1 tags: - test ``` OUTPUT ``` $ ansible-playbook -t test test.yml PLAY [test playbook] *********************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [debug] ******************************************************************* ok: [localhost] => { ""msg"": ""hello from debug"" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS I would expect it to run the role1, because it was called with the tags ##### ACTUAL RESULTS role1 was never called ",1,include role does not support tags issue type bug report component name include role ansible version ansible configuration n a os environment linux red hat enterprise summary originally filed this issue here because i was told on irc this was part of ansible because it was a action plugin the bot told me i was wrong after a few days so now it s here forgive my terminology i will do my best to describe that issue i am seeing i am using a pre release version of ansible for this when using include role module with tags the include role seems to ignore the tags and does not apply them when being called i am using include role in my playbook in the task list i apply a tag on it when i call ansible playbook t tagname it does not run my include role task steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name test playbook hosts localhost tasks debug msg hello from debug tags test include role name tags test output ansible playbook t test test yml play task ok task ok msg hello from debug play recap localhost ok changed unreachable failed expected results i would expect it to run the because it was called with the tags actual results was never called ,1 1710,6574448708.0,IssuesEvent,2017-09-11 12:56:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_command fails with cisco nexus version 7.x,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Cisco Nexus Software BIOS: version 3.9.0 NXOS: version 7.0(3)I4(1) BIOS compile time: 01/25/2016 NXOS image file is: bootflash:///nxos.7.0.3.I4.1.bin NXOS compile time: 5/15/2016 20:00:00 [05/15/2016 20:24:30] Hardware cisco Nexus3000 C3132Q Chassis Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 8153476 kB of memory. Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY issue command to delete file from boothflash. 
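For the include_role report above, a workaround available at the time was to fall back to a static role reference, since tags set on a role listed under `roles:` are inherited by its tasks; `role1` and the `test` tag below follow the reproduction case, the rest is a sketch:
```
- hosts: localhost
  roles:
    - role: role1
      tags:
        - test
```
Running `ansible-playbook -t test` then executes the role's tasks, because statically applied roles propagate their tags, unlike the dynamic include in the report.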
This same command with fine on Cisoc Nexus U6.6 ##### STEPS TO REPRODUCE ``` --- - name: Delete remote config nxos_command: provider: ""{{ cli }}"" host: ""{{ ansible_host }}"" commands: - ""delete bootflash:{{ inventory_hostname }}.conf "" ``` ##### EXPECTED RESULTS delete file from bootflash ##### ACTUAL RESULTS ``` task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/roles/copyfirmware/tasks/main.yml:2 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.229.140> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.140> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641 `"" && echo ansible-tmp-1478227857.81-191488415490641=""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641 `"" ) && sleep 0' Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.230.12> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.230.12> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391 `"" && echo ansible-tmp-1478227857.81-248162808087391=""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391 `"" ) && sleep 0' <10.10.230.12> PUT /tmp/tmpRHdIQN TO /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py <10.10.229.140> PUT /tmp/tmpIoWDhn TO /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py <10.10.229.140> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/ /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py && sleep 0' <10.10.230.12> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/ /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py && sleep 0' <10.10.229.140> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/"" > /dev/null 2>&1 && sleep 0' <10.10.230.12> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_o26AcW/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_o26AcW/ansible_module_nxos_command.py"", line 238, in main module.fail_json(msg=str(exc), **exc.kwargs) File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1811, in fail_json File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in remove_values File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 386, in remove_values File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 399, in remove_values TypeError: Value of unknown type: , delete bootflash:rr1-n35-r09-x32sp-1a.conf fatal: [rr1-n35-r09-x32sp-1a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_o26AcW/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_o26AcW/ansible_module_nxos_command.py\"", line 238, in main\n module.fail_json(msg=str(exc), **exc.kwargs)\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1811, in fail_json\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in \n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 386, in remove_values\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values\nTypeError: Value of unknown type: , delete bootflash:rr1-n35-r09-x32sp-1a.conf \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_t5uacC/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_t5uacC/ansible_module_nxos_command.py"", line 238, in main module.fail_json(msg=str(exc), **exc.kwargs) File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1811, in fail_json File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in remove_values File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 386, in remove_values File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 399, in remove_values TypeError: Value of unknown type: , delete bootflash:rr1-n35-r10-x32sp-2a.conf fatal: [rr1-n35-r10-x32sp-2a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_t5uacC/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_t5uacC/ansible_module_nxos_command.py\"", line 238, in main\n module.fail_json(msg=str(exc), **exc.kwargs)\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1811, in fail_json\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in \n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 386, in remove_values\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values\nTypeError: Value of unknown type: , delete bootflash:rr1-n35-r10-x32sp-2a.conf \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @/home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/nexusbaseconfig.retry PLAY RECAP ********************************************************************* rr1-n35-r09-x32sp-1a : ok=16 changed=15 unreachable=0 failed=1 rr1-n35-r10-x32sp-2a : ok=16 changed=15 unreachable=0 failed=1 ``` ",True,"nxos_command fails with cisco nexus version 7.x - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Cisco Nexus Software BIOS: version 3.9.0 NXOS: version 7.0(3)I4(1) BIOS compile time: 01/25/2016 NXOS image file is: bootflash:///nxos.7.0.3.I4.1.bin NXOS compile time: 5/15/2016 20:00:00 [05/15/2016 20:24:30] Hardware cisco Nexus3000 C3132Q Chassis Intel(R) Core(TM) i3-3227U CPU @ 2.50GHz with 8153476 kB of memory. Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY issue command to delete file from boothflash. 
This same command with fine on Cisoc Nexus U6.6 ##### STEPS TO REPRODUCE ``` --- - name: Delete remote config nxos_command: provider: ""{{ cli }}"" host: ""{{ ansible_host }}"" commands: - ""delete bootflash:{{ inventory_hostname }}.conf "" ``` ##### EXPECTED RESULTS delete file from bootflash ##### ACTUAL RESULTS ``` task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/roles/copyfirmware/tasks/main.yml:2 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.229.140> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.229.140> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641 `"" && echo ansible-tmp-1478227857.81-191488415490641=""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641 `"" ) && sleep 0' Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_command.py <10.10.230.12> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.230.12> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391 `"" && echo ansible-tmp-1478227857.81-248162808087391=""` echo $HOME/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391 `"" ) && sleep 0' <10.10.230.12> PUT /tmp/tmpRHdIQN TO /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py <10.10.229.140> PUT /tmp/tmpIoWDhn TO /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py <10.10.229.140> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/ /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py && sleep 0' <10.10.230.12> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/ /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py && sleep 0' <10.10.229.140> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-191488415490641/"" > /dev/null 2>&1 && sleep 0' <10.10.230.12> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/nxos_command.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478227857.81-248162808087391/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_o26AcW/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_o26AcW/ansible_module_nxos_command.py"", line 238, in main module.fail_json(msg=str(exc), **exc.kwargs) File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1811, in fail_json File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in remove_values File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 386, in remove_values File ""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py"", line 399, in remove_values TypeError: Value of unknown type: , delete bootflash:rr1-n35-r09-x32sp-1a.conf fatal: [rr1-n35-r09-x32sp-1a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_o26AcW/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_o26AcW/ansible_module_nxos_command.py\"", line 238, in main\n module.fail_json(msg=str(exc), **exc.kwargs)\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1811, in fail_json\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in \n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 386, in remove_values\n File \""/tmp/ansible_o26AcW/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values\nTypeError: Value of unknown type: , delete bootflash:rr1-n35-r09-x32sp-1a.conf \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_t5uacC/ansible_module_nxos_command.py"", line 257, in main() File ""/tmp/ansible_t5uacC/ansible_module_nxos_command.py"", line 238, in main module.fail_json(msg=str(exc), **exc.kwargs) File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 1811, in fail_json File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in remove_values File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 388, in File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 386, in remove_values File ""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py"", line 399, in remove_values TypeError: Value of unknown type: , delete bootflash:rr1-n35-r10-x32sp-2a.conf fatal: [rr1-n35-r10-x32sp-2a]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_command"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_t5uacC/ansible_module_nxos_command.py\"", line 257, in \n main()\n File \""/tmp/ansible_t5uacC/ansible_module_nxos_command.py\"", line 238, in main\n module.fail_json(msg=str(exc), **exc.kwargs)\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1811, in fail_json\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in \n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 386, in remove_values\n File \""/tmp/ansible_t5uacC/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values\nTypeError: Value of unknown type: , delete bootflash:rr1-n35-r10-x32sp-2a.conf \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @/home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/nexusbaseconfig.retry PLAY RECAP ********************************************************************* rr1-n35-r09-x32sp-1a : ok=16 changed=15 unreachable=0 failed=1 rr1-n35-r10-x32sp-2a : ok=16 changed=15 unreachable=0 failed=1 ``` ",1,nxos command fails with cisco nexus version x issue type bug report component name ansible version ansible config file home emarq solutions network automation mas ansible cisco nexus ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific cisco nexus software bios version nxos version bios compile time nxos image file is bootflash nxos bin nxos compile time hardware cisco chassis intel r core tm cpu with kb of memory linux generic ubuntu smp wed oct utc gnu linux summary issue command to delete file from boothflash this same command with fine on cisoc nexus steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name delete remote config nxos command provider cli host ansible host commands delete bootflash inventory hostname conf expected results delete file from bootflash actual results task path home emarq solutions network automation mas ansible cisco nexus roles copyfirmware tasks main yml using module file usr lib dist packages ansible modules core network nxos nxos command py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep using module file usr lib dist packages ansible modules core network nxos nxos command py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmprhdiqn to home emarq ansible tmp ansible tmp nxos command py put tmp tmpiowdhn to home emarq ansible tmp ansible tmp nxos command py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos command py sleep exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos command py sleep exec bin sh c usr bin python home emarq ansible tmp 
ansible tmp nxos command py rm rf home emarq ansible tmp ansible tmp dev null sleep exec bin sh c usr bin python home emarq ansible tmp ansible tmp nxos command py rm rf home emarq ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module nxos command py line in main file tmp ansible ansible module nxos command py line in main module fail json msg str exc exc kwargs file tmp ansible ansible modlib zip ansible module utils basic py line in fail json file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in remove values typeerror value of unknown type delete bootflash conf fatal failed changed false failed true invocation module name nxos command module stderr traceback most recent call last n file tmp ansible ansible module nxos command py line in n main n file tmp ansible ansible module nxos command py line in main n module fail json msg str exc exc kwargs n file tmp ansible ansible modlib zip ansible module utils basic py line in fail json n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values n file tmp ansible ansible modlib zip ansible module utils basic py line in n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values ntypeerror value of unknown type delete bootflash conf n module stdout msg module failure an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module nxos command py line in main file tmp ansible ansible module nxos command py line in main module fail json msg str exc exc kwargs file tmp ansible ansible modlib zip ansible module utils basic py line in fail json file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in remove values typeerror value of unknown type delete bootflash conf fatal failed changed false failed true invocation module name nxos command module stderr traceback most recent call last n file tmp ansible ansible module nxos command py line in n main n file tmp ansible ansible module nxos command py line in main n module fail json msg str exc exc kwargs n file tmp ansible ansible modlib zip ansible module utils basic py line in fail json n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values n file tmp ansible ansible modlib zip ansible module utils basic py line in n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values n file tmp ansible ansible modlib zip ansible module utils basic py line in remove values ntypeerror value of unknown type delete bootflash conf n module stdout msg module failure to retry use limit home emarq solutions network automation mas ansible cisco nexus nexusbaseconfig retry play recap ok changed unreachable failed ok changed unreachable failed ,1 817,4441895460.0,IssuesEvent,2016-08-19 
11:13:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive failed to unpack tar files,bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT management node: centos 6.5 remote node: centos 6.5 ##### SUMMARY unarchive module shows error when it handle gzip, bzip2 and xz compressed as well as uncompressed tar files using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE unarchive: src: /usr/local/src/example.tar.gz dest: /usr/local/src creates: /usr/local/src/example/Makefile copy: no ``` ``` ##### EXPECTED RESULTS changed: [xxxxx] ##### ACTUAL RESULTS fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] 没有那个文件或目录: '/usr/local/src/test.txt'""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry ``` ``` ",True,"unarchive failed to unpack tar files - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT management node: centos 6.5 remote node: centos 6.5 ##### SUMMARY unarchive module shows error when it handle gzip, bzip2 and xz compressed as well as uncompressed tar files using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE unarchive: src: /usr/local/src/example.tar.gz dest: /usr/local/src creates: /usr/local/src/example/Makefile copy: no ``` ``` ##### EXPECTED RESULTS changed: [xxxxx] ##### ACTUAL RESULTS fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] 没有那个文件或目录: '/usr/local/src/test.txt'""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry ``` ``` ",1,unarchive failed to unpack tar files issue type bug report component name unarchive ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific management node centos remote node centos summary unarchive module shows error when it handle gzip and xz compressed as well as uncompressed tar files using ansible stable version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used unarchive src usr local src example tar gz dest usr local src creates usr local src example makefile copy no expected results changed actual results fatal failed changed false failed true msg unexpected error when accessing exploded file 没有那个文件或目录 usr local src test txt no more hosts left to retry use limit test retry ,1 1588,6572366311.0,IssuesEvent,2017-09-11 01:45:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,postgresql_db should support dump and import states like mysql_db does,affects_1.9 feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Ansible Version: ``` ansible 1.9.1 configured module search path = None ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The postgresql_db module should support the dump state like the mysql_db module does. ##### Steps To Reproduce: Example playbook for dump: ``` --- - hosts: postgres-server tasks: - name: backup db server postgresql_db: name={{item}} state=dump target=/{{item}}-{{ansible_date_time.iso8601}}.sql with_items: db_name ``` Example playbook for import: ``` --- - hosts: postgres-server tasks: - name: Restore db server postgresql_db: name=example state=import target=/example.sql ``` ##### Expected Results: That a database would be dumped or imported to/from a file named in the target parameter. ##### Actual Results: ``` TASK: [backup db server] ****************************************************** failed: [xxx.xxx.xxx.xxx] => (item=db_name) => {""failed"": true, ""item"": ""db_name""} msg: unsupported parameter for module: target FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** ``` ",True,"postgresql_db should support dump and import states like mysql_db does - ##### Issue Type: - Feature Idea ##### Ansible Version: ``` ansible 1.9.1 configured module search path = None ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: The postgresql_db module should support the dump state like the mysql_db module does. 
##### Steps To Reproduce: Example playbook for dump: ``` --- - hosts: postgres-server tasks: - name: backup db server postgresql_db: name={{item}} state=dump target=/{{item}}-{{ansible_date_time.iso8601}}.sql with_items: db_name ``` Example playbook for import: ``` --- - hosts: postgres-server tasks: - name: Restore db server postgresql_db: name=example state=import target=/example.sql ``` ##### Expected Results: That a database would be dumped or imported to/from a file named in the target parameter. ##### Actual Results: ``` TASK: [backup db server] ****************************************************** failed: [xxx.xxx.xxx.xxx] => (item=db_name) => {""failed"": true, ""item"": ""db_name""} msg: unsupported parameter for module: target FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** ``` ",1,postgresql db should support dump and import states like mysql db does issue type feature idea ansible version ansible configured module search path none ansible configuration n a environment n a summary the postgresql db module should support the dump state like the mysql db module does steps to reproduce example playbook for dump hosts postgres server tasks name backup db server postgresql db name item state dump target item ansible date time sql with items db name example playbook for import hosts postgres server tasks name restore db server postgresql db name example state import target example sql expected results that a database would be dumped or imported to from a file named in the target parameter actual results task failed item db name failed true item db name msg unsupported parameter for module target fatal all hosts have already failed aborting play recap ,1 1181,5097443189.0,IssuesEvent,2017-01-03 21:30:31,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,postgres_user module: `role_attr_flags` does nothing when combined with `no_password_changes`,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME postgresql_user ##### ANSIBLE VERSION ``` Ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT Mac OS X El Capitan, Version 10.11.4 Tested on Docker official postgres container and on Amazon RDS PostgreSQL DB instance ##### SUMMARY When running a task with postgres_user module, the option `role_attr_flags` does not set any role attributes when `no_password_change` is set to `yes` ##### STEPS TO REPRODUCE Run a task with module postgres_user, set any attribute in `role_attr_flags` and the option `no_password_changes` to `yes` ``` --- - hosts: localhost tasks: - name: create user postgresql_user: name: testing_user password: somerandompassword state: present login_host: your.amazon.url.to.postges.instance login_user: yourdefaultpostgresuser login_password: yoursecretpasswordforthedefaultuser - name: add attributes to user postgresql_user: name: testing_user no_password_changes: yes role_attr_flags: CREATEDB login_host: your.amazon.url.to.postges.instance login_user: yourdefaultpostgresuser login_password: yoursecretpasswordforthedefaultuser ``` ##### EXPECTED RESULTS The user testing_user should have the CreateDB attribute ##### ACTUAL RESULTS No changes made. 
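Until postgresql_db grew a dump state as requested above, the usual interim approach was to shell out to pg_dump from a task; a minimal sketch, with the database name, output path, and postgres account as placeholders rather than values from the report:
```
- name: dump the example database with pg_dump
  command: pg_dump -Fc -f /tmp/example.dump example
  become: true
  become_user: postgres
```
Later releases of the module did add dump and restore states that take a target path, much as proposed in this feature idea.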
``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [create user] ************************************************************* changed: [localhost] => {""changed"": true, ""user"": ""test_user""} TASK [add attributes to user] ************************************************** ok: [localhost] => {""changed"": false, ""user"": ""test_user""} PLAY RECAP ********************************************************************* localhost : ok=3 changed=1 unreachable=0 failed=0 ``` ",True,"postgres_user module: `role_attr_flags` does nothing when combined with `no_password_changes` - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME postgresql_user ##### ANSIBLE VERSION ``` Ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT Mac OS X El Capitan, Version 10.11.4 Tested on Docker official postgres container and on Amazon RDS PostgreSQL DB instance ##### SUMMARY When running a task with postgres_user module, the option `role_attr_flags` does not set any role attributes when `no_password_change` is set to `yes` ##### STEPS TO REPRODUCE Run a task with module postgres_user, set any attribute in `role_attr_flags` and the option `no_password_changes` to `yes` ``` --- - hosts: localhost tasks: - name: create user postgresql_user: name: testing_user password: somerandompassword state: present login_host: your.amazon.url.to.postges.instance login_user: yourdefaultpostgresuser login_password: yoursecretpasswordforthedefaultuser - name: add attributes to user postgresql_user: name: testing_user no_password_changes: yes role_attr_flags: CREATEDB login_host: your.amazon.url.to.postges.instance login_user: yourdefaultpostgresuser login_password: yoursecretpasswordforthedefaultuser ``` ##### EXPECTED RESULTS The user testing_user should have the CreateDB attribute ##### ACTUAL RESULTS No changes made. 
``` PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [create user] ************************************************************* changed: [localhost] => {""changed"": true, ""user"": ""test_user""} TASK [add attributes to user] ************************************************** ok: [localhost] => {""changed"": false, ""user"": ""test_user""} PLAY RECAP ********************************************************************* localhost : ok=3 changed=1 unreachable=0 failed=0 ``` ",1,postgres user module role attr flags does nothing when combined with no password changes issue type bug report component name postgresql user ansible version ansible os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific mac os x el capitan version tested on docker official postgres container and on amazon rds postgresql db instance summary when running a task with postgres user module the option role attr flags does not set any role attributes when no password change is set to yes steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run a task with module postgres user set any attribute in role attr flags and the option no password changes to yes hosts localhost tasks name create user postgresql user name testing user password somerandompassword state present login host your amazon url to postges instance login user yourdefaultpostgresuser login password yoursecretpasswordforthedefaultuser name add attributes to user postgresql user name testing user no password changes yes role attr flags createdb login host your amazon url to postges instance login user yourdefaultpostgresuser login password yoursecretpasswordforthedefaultuser expected results the user testing user should have the createdb attribute actual results no changes made play task ok task changed changed true user test user task ok changed false user test user play recap localhost ok changed unreachable failed ,1 1760,6574997582.0,IssuesEvent,2017-09-11 14:43:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami: AttributeError: 'BlockDeviceType' object has no attribute 'encrypted',affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT doesn't work from: Ubuntu 14.04, with python2.7-boto 2.20.1-2ubuntu2 works from: Ubuntu 16.04, with 2.38.0-1ubuntu1 to: Ubuntu 16.04 on AWS ##### SUMMARY ec2_ami doesn't work on Ubuntu 14.04, works fine on 16.04 I suspect python-boto might be the problem. 14.04 uses 2.20.1-2ubuntu2, 16.04 uses 2.38.0-1ubuntu1 ##### STEPS TO REPRODUCE ``` - ec2_ami: instance_id: ""{{ awsInstanceId }}"" region: ""{{ awsRegion }}"" ec2_access_key: ""{{ hostvars[apiHost]['ec2_access_key'] }}"" ec2_secret_key: ""{{ hostvars[apiHost]['ec2_secret_key'] }}"" wait: true name: ""{{gitsha}}-{{templateName}}"" wait_timeout: 3600 register: ami ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` fatal: [production-worker-template.clara.io]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 560, in \n main()\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 552, in main\n create_image(module, ec2)\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 419, in create_image\n module.exit_json(msg=\""AMI creation operation complete\"", changed=True, **get_ami_info(img))\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 331, in get_ami_info\n block_device_mapping=get_block_device_mapping(image),\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 318, in get_block_device_mapping\n 'encrypted': bdm[device_name].encrypted,\nAttributeError: 'BlockDeviceType' object has no attribute 'encrypted'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",True,"ec2_ami: AttributeError: 'BlockDeviceType' object has no attribute 'encrypted' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT doesn't work from: Ubuntu 14.04, with python2.7-boto 2.20.1-2ubuntu2 works from: Ubuntu 16.04, with 2.38.0-1ubuntu1 to: Ubuntu 16.04 on AWS ##### SUMMARY ec2_ami doesn't work on Ubuntu 14.04, works fine on 16.04 I suspect python-boto might be the problem. 14.04 uses 2.20.1-2ubuntu2, 16.04 uses 2.38.0-1ubuntu1 ##### STEPS TO REPRODUCE ``` - ec2_ami: instance_id: ""{{ awsInstanceId }}"" region: ""{{ awsRegion }}"" ec2_access_key: ""{{ hostvars[apiHost]['ec2_access_key'] }}"" ec2_secret_key: ""{{ hostvars[apiHost]['ec2_secret_key'] }}"" wait: true name: ""{{gitsha}}-{{templateName}}"" wait_timeout: 3600 register: ami ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` fatal: [production-worker-template.clara.io]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 560, in \n main()\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 552, in main\n create_image(module, ec2)\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 419, in create_image\n module.exit_json(msg=\""AMI creation operation complete\"", changed=True, **get_ami_info(img))\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 331, in get_ami_info\n block_device_mapping=get_block_device_mapping(image),\n File \""/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\"", line 318, in get_block_device_mapping\n 'encrypted': bdm[device_name].encrypted,\nAttributeError: 'BlockDeviceType' object has no attribute 'encrypted'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",1, ami attributeerror blockdevicetype object has no attribute encrypted issue type bug report component name ami ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific doesn t work from ubuntu with boto works from ubuntu with to ubuntu on aws summary ami doesn t work on ubuntu works fine on i suspect python boto might be the problem uses uses steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ami instance id awsinstanceid region awsregion access key hostvars secret key hostvars wait true name gitsha templatename wait timeout register ami expected results actual results fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible mk ansible module ami py line in n main n file tmp ansible mk ansible module ami py line in main n create image module n file tmp ansible mk ansible module ami py line in create image n module exit json msg ami creation operation complete changed true get ami info img n file tmp ansible mk ansible module ami py line in get ami info n block device mapping get block device mapping image n file tmp ansible mk ansible module ami py line in get block device mapping n encrypted bdm encrypted nattributeerror blockdevicetype object has no attribute encrypted n module stdout msg module failure ,1 1095,4958184021.0,IssuesEvent,2016-12-02 08:49:49,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cron module failing if entry already exists (on RHEL5),affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel bb41a005b3) last updated 2016/11/17 15:58:57 (GMT +800) lib/ansible/modules/core: (detached HEAD a4cddac368) last updated 2016/11/17 15:59:31 (GMT +800) lib/ansible/modules/extras: (detached HEAD 19c7d5b31b) last updated 2016/11/17 16:00:00 (GMT +800) config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.4 (Santiago) ##### SUMMARY cron module seems to fail if entry already exists ##### STEPS TO REPRODUCE ``` [ansible@vmhklftpscdv1 ~]$ cat cron.yml - hosts: ""{{hosts}}"" tasks: - name: test cron cron: name=""Test Cron Module"" minute=""0"" hour=""*"" 
job=""cron.sh echo x "" [ansible@vmhklftpscdv1 ~]$ ansible-playbook cron.yml -e ""hosts=vmfurlaposdbdv1"" PLAY [vmfurlaposdbdv1] ************************************************************************************************************************* TASK [Gathering Facts] ************************************************************************************************************************* ok: [vmfurlaposdbdv1] TASK [test cron] ******************************************************************************************************************************* changed: [vmfurlaposdbdv1] PLAY RECAP ************************************************************************************************************************************* vmfurlaposdbdv1 : ok=2 changed=1 unreachable=0 failed=0 [ansible@vmhklftpscdv1 ~]$ ansible-playbook cron.yml -e ""hosts=vmfurlaposdbdv1"" PLAY [vmfurlaposdbdv1] ************************************************************************************************************************* TASK [Gathering Facts] ************************************************************************************************************************* ok: [vmfurlaposdbdv1] TASK [test cron] ******************************************************************************************************************************* fatal: [vmfurlaposdbdv1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_ERoh58/ansible_module_cron.py\"", line 736, in ?\r\n main()\r\n File \""/tmp/ansible_ERoh58/ansible_module_cron.py\"", line 693, in main\r\n if not changed and not crontab.existing.endswith(('\\r', '\\n')):\r\nTypeError: expected a character buffer object\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/ansible/cron.retry PLAY RECAP ************************************************************************************************************************************* vmfurlaposdbdv1 : ok=1 changed=0 unreachable=0 failed=1 ``` On target machine you can see cron entry was created fine: ``` [oracle@vmfurlaposdbdv1 ~]$ crontab -l PATH=/usr/local/bin #Ansible: Test Cron Module 0 * * * * cron.sh echo x ``` ##### EXPECTED RESULTS Should do nothing as entry already exists. ##### ACTUAL RESULTS Throws exception Note seems related to managed host. Host is running RHEL5.8. 
Seems to run find on other managed hosts (most are running RHEL6) ",True,"cron module failing if entry already exists (on RHEL5) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel bb41a005b3) last updated 2016/11/17 15:58:57 (GMT +800) lib/ansible/modules/core: (detached HEAD a4cddac368) last updated 2016/11/17 15:59:31 (GMT +800) lib/ansible/modules/extras: (detached HEAD 19c7d5b31b) last updated 2016/11/17 16:00:00 (GMT +800) config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.4 (Santiago) ##### SUMMARY cron module seems to fail if entry already exists ##### STEPS TO REPRODUCE ``` [ansible@vmhklftpscdv1 ~]$ cat cron.yml - hosts: ""{{hosts}}"" tasks: - name: test cron cron: name=""Test Cron Module"" minute=""0"" hour=""*"" job=""cron.sh echo x "" [ansible@vmhklftpscdv1 ~]$ ansible-playbook cron.yml -e ""hosts=vmfurlaposdbdv1"" PLAY [vmfurlaposdbdv1] ************************************************************************************************************************* TASK [Gathering Facts] ************************************************************************************************************************* ok: [vmfurlaposdbdv1] TASK [test cron] ******************************************************************************************************************************* changed: [vmfurlaposdbdv1] PLAY RECAP ************************************************************************************************************************************* vmfurlaposdbdv1 : ok=2 changed=1 unreachable=0 failed=0 [ansible@vmhklftpscdv1 ~]$ ansible-playbook cron.yml -e ""hosts=vmfurlaposdbdv1"" PLAY [vmfurlaposdbdv1] ************************************************************************************************************************* TASK [Gathering Facts] ************************************************************************************************************************* ok: [vmfurlaposdbdv1] TASK [test cron] ******************************************************************************************************************************* fatal: [vmfurlaposdbdv1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_ERoh58/ansible_module_cron.py\"", line 736, in ?\r\n main()\r\n File \""/tmp/ansible_ERoh58/ansible_module_cron.py\"", line 693, in main\r\n if not changed and not crontab.existing.endswith(('\\r', '\\n')):\r\nTypeError: expected a character buffer object\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/ansible/cron.retry PLAY RECAP ************************************************************************************************************************************* vmfurlaposdbdv1 : ok=1 changed=0 unreachable=0 failed=1 ``` On target machine you can see cron entry was created fine: ``` [oracle@vmfurlaposdbdv1 ~]$ crontab -l PATH=/usr/local/bin #Ansible: Test Cron Module 0 * * * * cron.sh echo x ``` ##### EXPECTED RESULTS Should do nothing as entry already exists. ##### ACTUAL RESULTS Throws exception Note seems related to managed host. Host is running RHEL5.8. 
Seems to run find on other managed hosts (most are running RHEL6) ",1,cron module failing if entry already exists on issue type bug report component name cron module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides os environment red hat enterprise linux server release santiago summary cron module seems to fail if entry already exists steps to reproduce cat cron yml hosts hosts tasks name test cron cron name test cron module minute hour job cron sh echo x ansible playbook cron yml e hosts play task ok task changed play recap ok changed unreachable failed ansible playbook cron yml e hosts play task ok task fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file tmp ansible ansible module cron py line in r n main r n file tmp ansible ansible module cron py line in main r n if not changed and not crontab existing endswith r n r ntypeerror expected a character buffer object r n msg module failure to retry use limit home ansible cron retry play recap ok changed unreachable failed on target machine you can see cron entry was created fine crontab l path usr local bin ansible test cron module cron sh echo x expected results should do nothing as entry already exists actual results throws exception note seems related to managed host host is running seems to run find on other managed hosts most are running ,1 1861,6577413389.0,IssuesEvent,2017-09-12 00:44:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,command module does not return stderr of `postfix check`,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME command ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /home/jooadam/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [ssh_connection] ssh_args = '' ``` ##### OS / ENVIRONMENT Ubuntu 12.04 / Ubuntu 15.10 ##### SUMMARY Running `postfix check` using the command module on error the stderr attribute is empty. ##### STEPS TO REPRODUCE ``` # /etc/postfix/main.cf jakjsdkjfksdfjskdjfkdj ``` ``` { ""name"": ""validate postfix configuration"", ""action"": ""command postfix check"" } ``` ##### EXPECTED RESULTS ``` fatal: [192.0.2.0]: FAILED! => {""changed"": true, ""cmd"": [""postfix"", ""check""], ""delta"": ""0:00:01.004179"", ""end"": ""2016-04-18 21:01:41.040310"", ""failed"": true, ""rc"": 1, ""start"": ""2016-04-18 21:01:40.036131"", ""stderr"": ""postfix: fatal: /etc/postfix/main.cf, line 1: missing '=' after attribute name: \""jakjsdkjfksdfjskdjfkdj\"""", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ##### ACTUAL RESULTS ``` fatal: [192.0.2.0]: FAILED! 
=> {""changed"": true, ""cmd"": [""postfix"", ""check""], ""delta"": ""0:00:01.004179"", ""end"": ""2016-04-18 21:01:41.040310"", ""failed"": true, ""rc"": 1, ""start"": ""2016-04-18 21:01:40.036131"", ""stderr"": """", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ",True,"command module does not return stderr of `postfix check` - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME command ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /home/jooadam/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [ssh_connection] ssh_args = '' ``` ##### OS / ENVIRONMENT Ubuntu 12.04 / Ubuntu 15.10 ##### SUMMARY Running `postfix check` using the command module on error the stderr attribute is empty. ##### STEPS TO REPRODUCE ``` # /etc/postfix/main.cf jakjsdkjfksdfjskdjfkdj ``` ``` { ""name"": ""validate postfix configuration"", ""action"": ""command postfix check"" } ``` ##### EXPECTED RESULTS ``` fatal: [192.0.2.0]: FAILED! => {""changed"": true, ""cmd"": [""postfix"", ""check""], ""delta"": ""0:00:01.004179"", ""end"": ""2016-04-18 21:01:41.040310"", ""failed"": true, ""rc"": 1, ""start"": ""2016-04-18 21:01:40.036131"", ""stderr"": ""postfix: fatal: /etc/postfix/main.cf, line 1: missing '=' after attribute name: \""jakjsdkjfksdfjskdjfkdj\"""", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ##### ACTUAL RESULTS ``` fatal: [192.0.2.0]: FAILED! => {""changed"": true, ""cmd"": [""postfix"", ""check""], ""delta"": ""0:00:01.004179"", ""end"": ""2016-04-18 21:01:41.040310"", ""failed"": true, ""rc"": 1, ""start"": ""2016-04-18 21:01:40.036131"", ""stderr"": """", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ",1,command module does not return stderr of postfix check issue type bug report component name command ansible version ansible config file home jooadam ansible cfg configured module search path default w o overrides configuration ssh args os environment ubuntu ubuntu summary running postfix check using the command module on error the stderr attribute is empty steps to reproduce etc postfix main cf jakjsdkjfksdfjskdjfkdj name validate postfix configuration action command postfix check expected results fatal failed changed true cmd delta end failed true rc start stderr postfix fatal etc postfix main cf line missing after attribute name jakjsdkjfksdfjskdjfkdj stdout stdout lines warnings actual results fatal failed changed true cmd delta end failed true rc start stderr stdout stdout lines warnings ,1 955,4699765651.0,IssuesEvent,2016-10-12 16:34:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,rpm_key is not accepting a list of key-urls,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME rpm_key ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION nothing changed in /etc/ansible.cfg ##### OS / ENVIRONMENT ubuntu 16.04 ##### SUMMARY Inside a role i got: ``` … - name: Install gpg keys for HP repo rpm_key: state=present key={{ item }} with_items: - ""{{hprepokeys}}"" … ``` and in my vars.yml i got ``` hprepokeys: - 'http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub' - 'http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub' - 'http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub' - 'http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub' ``` Running the playbook is telling ``` TASK [common : Install gpg keys for HP repo] *********************************** fatal: [192.168.0.168]: FAILED! 
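For the `postfix check` report above, a quick way to confirm that the diagnostics really go to stderr, independent of the command module, is to run the same command through `subprocess`; the command name comes from the report, everything else is a generic sketch:

```
# Minimal sketch: capture stdout and stderr of "postfix check" separately.
# Requires postfix to be installed; otherwise substitute any command.
import subprocess

proc = subprocess.Popen(["postfix", "check"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print("rc     :", proc.returncode)
print("stdout :", out.decode(errors="replace").strip())
print("stderr :", err.decode(errors="replace").strip())   # the fatal main.cf message lands here
```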
=> {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'hprepokeys' is undefined\n\nThe error appears to have been in '/foobar/roles/common/tasks/main.yml': line 39, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Install gpg keys for HP repo\n ^ here\n""} ``` ##### STEPS TO REPRODUCE Have a look at the SUMMARY ##### EXPECTED RESULTS rpm_key is handling the list of vars ##### ACTUAL RESULTS A fatal error, see SUMMARY ",True,"rpm_key is not accepting a list of key-urls - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME rpm_key ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION nothing changed in /etc/ansible.cfg ##### OS / ENVIRONMENT ubuntu 16.04 ##### SUMMARY Inside a role i got: ``` … - name: Install gpg keys for HP repo rpm_key: state=present key={{ item }} with_items: - ""{{hprepokeys}}"" … ``` and in my vars.yml i got ``` hprepokeys: - 'http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub' - 'http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub' - 'http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub' - 'http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub' ``` Running the playbook is telling ``` TASK [common : Install gpg keys for HP repo] *********************************** fatal: [192.168.0.168]: FAILED! => {""failed"": true, ""msg"": ""the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'hprepokeys' is undefined\n\nThe error appears to have been in '/foobar/roles/common/tasks/main.yml': line 39, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Install gpg keys for HP repo\n ^ here\n""} ``` ##### STEPS TO REPRODUCE Have a look at the SUMMARY ##### EXPECTED RESULTS rpm_key is handling the list of vars ##### ACTUAL RESULTS A fatal error, see SUMMARY ",1,rpm key is not accepting a list of key urls issue type bug report component name rpm key ansible version ansible configuration nothing changed in etc ansible cfg os environment ubuntu summary inside a role i got … name install gpg keys for hp repo rpm key state present key item with items hprepokeys … and in my vars yml i got hprepokeys running the playbook is telling task fatal failed failed true msg the field args has an invalid value which appears to include a variable that is undefined the error was hprepokeys is undefined n nthe error appears to have been in foobar roles common tasks main yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name install gpg keys for hp repo n here n steps to reproduce have a look at the summary expected results rpm key is handling the list of vars actual results a fatal error see summary ,1 1782,6575831237.0,IssuesEvent,2017-09-11 17:29:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,rpm_key key_id verification erroneous ,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/rpm_key.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION not relevant ##### OS / ENVIRONMENT N/A Redhat/Centos rpm-based ##### SUMMARY Function is_key_imported does not work reliably because Function getkeyid only extracts the last part of the first signature 
packet, therefore the key will be imported on every ansible run ##### STEPS TO REPRODUCE Install the Mono gpg key (from the Ubuntu keyserver, as per official docu): ``` http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF ``` Short Summary: `getkeyid()` essentially runs `gpg --no-tty --batch --with-colons --fixed-list-mode --list-packets /tmp/key.1 |grep signature` And returns the last 8 characters of the first key id it finds. Then `is_key_imported()` runs `rpm -qa gpg-pubkey` and matches the keyid against the first key part of the filename: gpg-pubkey-**d3d831ef**-53dfa827 This does'nt work with the Mono key because `gpg --no-tty --batch --with-colons --fixed-list-mode --list-packets /tmp/key.1 |grep signature` returns: ``` :signature packet: algo 1, keyid C90F9CB90E1FAD0C :signature packet: algo 1, keyid A6A19B38D3D831EF :signature packet: algo 1, keyid A6A19B38D3D831EF ``` and so the last 8 characters of the second keyid match the first 8 character set of the rpm key information Perhaps verifying the Fingerprint of the keys, or verify the full key ids(patch following): `````` --- rpm_key.py 2016-09-30 13:38:31.000000000 +0200 +++ new_rpm_key.py 2016-09-30 14:49:18.000000000 +0200 @@ -149,33 +149,27 @@ stdout, stderr = self.execute_command([gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', '--list-packets', keyfile]) for line in stdout.splitlines(): line = line.strip() - if line.startswith(':signature packet:'): - # We want just the last 8 characters of the keyid - keyid = line.split()[-1].strip()[8:] + if ""keyid:"" in line: + keyid = line.split()[-1] return keyid self.json_fail(msg=""Unexpected gpg output"") def is_keyid(self, keystr): """"""Verifies if a key, as provided by the user is a keyid"""""" - return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE) + return re.match('(0x)?[0-9a-f]{16}', keystr, flags=re.IGNORECASE) def execute_command(self, cmd): - rc, stdout, stderr = self.module.run_command(cmd) + rc, stdout, stderr = self.module.run_command(cmd,use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg=stderr) return stdout, stderr def is_key_imported(self, keyid): - stdout, stderr = self.execute_command([self.rpm, '-qa', 'gpg-pubkey']) + cmd=self.rpm+' -q gpg-pubkey --qf ""%{description}"" | gpg --no-tty --batch --with-colons --fixed-list-mode --list-packets -' + stdout, stderr = self.execute_command(cmd) for line in stdout.splitlines(): - line = line.strip() - if not line: - continue - match = re.match('gpg-pubkey-([0-9a-f]+)-([0-9a-f]+)', line) - if not match: - self.module.fail_json(msg=""rpm returned unexpected output [%s]"" % line) - else: - if keyid == match.group(1): + if ""keyid: "" in line: + if keyid.upper() == line.split()[-1].upper(): return True return False``` `````` ",True,"rpm_key key_id verification erroneous - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/rpm_key.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION not relevant ##### OS / ENVIRONMENT N/A Redhat/Centos rpm-based ##### SUMMARY Function is_key_imported does not work reliably because Function getkeyid only extracts the last part of the first signature packet, therefore the key will be imported on every ansible run ##### STEPS TO REPRODUCE Install the Mono gpg key (from the Ubuntu keyserver, as per official docu): ``` http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF ``` Short Summary: `getkeyid()` essentially runs `gpg 
--no-tty --batch --with-colons --fixed-list-mode --list-packets /tmp/key.1 |grep signature` And returns the last 8 characters of the first key id it finds. Then `is_key_imported()` runs `rpm -qa gpg-pubkey` and matches the keyid against the first key part of the filename: gpg-pubkey-**d3d831ef**-53dfa827 This does'nt work with the Mono key because `gpg --no-tty --batch --with-colons --fixed-list-mode --list-packets /tmp/key.1 |grep signature` returns: ``` :signature packet: algo 1, keyid C90F9CB90E1FAD0C :signature packet: algo 1, keyid A6A19B38D3D831EF :signature packet: algo 1, keyid A6A19B38D3D831EF ``` and so the last 8 characters of the second keyid match the first 8 character set of the rpm key information Perhaps verifying the Fingerprint of the keys, or verify the full key ids(patch following): `````` --- rpm_key.py 2016-09-30 13:38:31.000000000 +0200 +++ new_rpm_key.py 2016-09-30 14:49:18.000000000 +0200 @@ -149,33 +149,27 @@ stdout, stderr = self.execute_command([gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', '--list-packets', keyfile]) for line in stdout.splitlines(): line = line.strip() - if line.startswith(':signature packet:'): - # We want just the last 8 characters of the keyid - keyid = line.split()[-1].strip()[8:] + if ""keyid:"" in line: + keyid = line.split()[-1] return keyid self.json_fail(msg=""Unexpected gpg output"") def is_keyid(self, keystr): """"""Verifies if a key, as provided by the user is a keyid"""""" - return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE) + return re.match('(0x)?[0-9a-f]{16}', keystr, flags=re.IGNORECASE) def execute_command(self, cmd): - rc, stdout, stderr = self.module.run_command(cmd) + rc, stdout, stderr = self.module.run_command(cmd,use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg=stderr) return stdout, stderr def is_key_imported(self, keyid): - stdout, stderr = self.execute_command([self.rpm, '-qa', 'gpg-pubkey']) + cmd=self.rpm+' -q gpg-pubkey --qf ""%{description}"" | gpg --no-tty --batch --with-colons --fixed-list-mode --list-packets -' + stdout, stderr = self.execute_command(cmd) for line in stdout.splitlines(): - line = line.strip() - if not line: - continue - match = re.match('gpg-pubkey-([0-9a-f]+)-([0-9a-f]+)', line) - if not match: - self.module.fail_json(msg=""rpm returned unexpected output [%s]"" % line) - else: - if keyid == match.group(1): + if ""keyid: "" in line: + if keyid.upper() == line.split()[-1].upper(): return True return False``` `````` ",1,rpm key key id verification erroneous issue type bug report component name ansible modules core packaging os rpm key py ansible version ansible configuration not relevant os environment n a redhat centos rpm based summary function is key imported does not work reliably because function getkeyid only extracts the last part of the first signature packet therefore the key will be imported on every ansible run steps to reproduce install the mono gpg key from the ubuntu keyserver as per official docu short summary getkeyid essentially runs gpg no tty batch with colons fixed list mode list packets tmp key grep signature and returns the last characters of the first key id it finds then is key imported runs rpm qa gpg pubkey and matches the keyid against the first key part of the filename gpg pubkey this does nt work with the mono key because gpg no tty batch with colons fixed list mode list packets tmp key grep signature returns signature packet algo keyid signature packet algo keyid signature packet algo keyid and so the last characters of 
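The rpm_key report above boils down to comparing the wrong eight characters. A standalone sketch of the comparison the reporter's patch is driving at, run on literal sample strings so it needs neither gpg nor rpm (the sample key ids are taken from the report):

```
# Minimal sketch: take the full key id from a gpg "--list-packets" line and
# compare its *last* eight hex digits, case-insensitively, with the short id
# embedded in the installed gpg-pubkey package name.
import re

gpg_line    = ":signature packet: algo 1, keyid A6A19B38D3D831EF"
rpm_package = "gpg-pubkey-d3d831ef-53dfa827"

full_keyid  = gpg_line.split()[-1]                                   # A6A19B38D3D831EF
short_keyid = re.match(r"gpg-pubkey-([0-9a-f]+)-", rpm_package).group(1)

already_imported = full_keyid.lower().endswith(short_keyid.lower())
print(already_imported)    # True -> the key should not be re-imported
```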
the second keyid match the first character set of the rpm key information perhaps verifying the fingerprint of the keys or verify the full key ids patch following rpm key py new rpm key py stdout stderr self execute command for line in stdout splitlines line line strip if line startswith signature packet we want just the last characters of the keyid keyid line split strip if keyid in line keyid line split return keyid self json fail msg unexpected gpg output def is keyid self keystr verifies if a key as provided by the user is a keyid return re match keystr flags re ignorecase return re match keystr flags re ignorecase def execute command self cmd rc stdout stderr self module run command cmd rc stdout stderr self module run command cmd use unsafe shell true if rc self module fail json msg stderr return stdout stderr def is key imported self keyid stdout stderr self execute command cmd self rpm q gpg pubkey qf description gpg no tty batch with colons fixed list mode list packets stdout stderr self execute command cmd for line in stdout splitlines line line strip if not line continue match re match gpg pubkey line if not match self module fail json msg rpm returned unexpected output line else if keyid match group if keyid in line if keyid upper line split upper return true return false ,1 1892,6577533669.0,IssuesEvent,2017-09-12 01:34:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vol: Add support for custom KMS keys to ec2_vol,affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: cloud/amazon/ec2_vol.py ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ``` [defaults] retry_files_enabled=False host_key_checking=False pipelining=True ``` ##### Environment: ``` CentOS 7.2 python2-boto-2.39.0-1.el7.noarch.rpm (from EPEL testing) ``` ##### Summary: Currently the module `ec2_vol` (right along with most of the other modules concerned with encryption on AWS) doesn't support specifying a custom encryption key. One is only able to specify whether the volume is encrypted or not and thus the default encryption key (in this case here for EBS) is used, when encryption is enabled. So this request is about adding sort of a quick fix to the `ec2_vol` module by adding a new parameter, but I guess for the future a more global review of the topic KMS might make sense. ",True,"ec2_vol: Add support for custom KMS keys to ec2_vol - ##### Issue Type: - Feature Idea ##### Plugin Name: cloud/amazon/ec2_vol.py ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ``` [defaults] retry_files_enabled=False host_key_checking=False pipelining=True ``` ##### Environment: ``` CentOS 7.2 python2-boto-2.39.0-1.el7.noarch.rpm (from EPEL testing) ``` ##### Summary: Currently the module `ec2_vol` (right along with most of the other modules concerned with encryption on AWS) doesn't support specifying a custom encryption key. One is only able to specify whether the volume is encrypted or not and thus the default encryption key (in this case here for EBS) is used, when encryption is enabled. So this request is about adding sort of a quick fix to the `ec2_vol` module by adding a new parameter, but I guess for the future a more global review of the topic KMS might make sense. 
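For the ec2_vol feature request above, the underlying EC2 API does accept a per-volume CMK. A sketch using boto3 directly (the module in the report is boto2-based, so this only illustrates the call such an option would have to make; region, zone and key ARN are placeholders, and real AWS credentials are required):

```
# Minimal sketch: create an encrypted EBS volume with a customer-managed
# KMS key instead of the account's default EBS key.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",
    Size=10,
    Encrypted=True,
    # Placeholder ARN for the customer-managed key.
    KmsKeyId="arn:aws:kms:eu-west-1:123456789012:key/00000000-0000-0000-0000-000000000000",
)
print(volume["VolumeId"])
```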
",1, vol add support for custom kms keys to vol issue type feature idea plugin name cloud amazon vol py ansible version ansible ansible configuration retry files enabled false host key checking false pipelining true environment centos boto noarch rpm from epel testing summary currently the module vol right along with most of the other modules concerned with encryption on aws doesn t support specifying a custom encryption key one is only able to specify whether the volume is encrypted or not and thus the default encryption key in this case here for ebs is used when encryption is enabled so this request is about adding sort of a quick fix to the vol module by adding a new parameter but i guess for the future a more global review of the topic kms might make sense ,1 1333,5718505743.0,IssuesEvent,2017-04-19 19:42:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Incorrect handling of quotes in lineinfile module,affects_1.9 bug_report waiting_on_maintainer,"### Issue Type Bug Report ### Component Name lineinfile module ### Ansible Version 1.9.3 ### Summary I am using Ansible 1.9.3. I have following content in my `test.yml` file: ``` yaml --- - hosts: all connection: local tasks: - lineinfile: dest: ./example.txt create: yes line: ""something '\""content\""' something else"" ``` And I run it as follows: `ansible-playbook -i localhost, test.yml` ### Expected result `example.txt` contains `something '""content""' something else`. ### Actual result `example.txt` contains `something content something else`. Without any quotes. ### Investigation It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py). After adding quotes line looks like `'something '""content""' something else'` and when passed to `module.safe_eval()` the Pythons's implicit string concatenation is applied and it looses all quotes. --- **Haven't checked if problem exists in development version, it seems module was significantly rewritten.** ",True,"Incorrect handling of quotes in lineinfile module - ### Issue Type Bug Report ### Component Name lineinfile module ### Ansible Version 1.9.3 ### Summary I am using Ansible 1.9.3. I have following content in my `test.yml` file: ``` yaml --- - hosts: all connection: local tasks: - lineinfile: dest: ./example.txt create: yes line: ""something '\""content\""' something else"" ``` And I run it as follows: `ansible-playbook -i localhost, test.yml` ### Expected result `example.txt` contains `something '""content""' something else`. ### Actual result `example.txt` contains `something content something else`. Without any quotes. ### Investigation It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py). After adding quotes line looks like `'something '""content""' something else'` and when passed to `module.safe_eval()` the Pythons's implicit string concatenation is applied and it looses all quotes. 
--- **Haven't checked if problem exists in development version, it seems module was significantly rewritten.** ",1,incorrect handling of quotes in lineinfile module issue type bug report component name lineinfile module ansible version summary i am using ansible i have following content in my test yml file yaml hosts all connection local tasks lineinfile dest example txt create yes line something content something else and i run it as follows ansible playbook i localhost test yml expected result example txt contains something content something else actual result example txt contains something content something else without any quotes investigation it seems the problem lies in the lineinfile module adding quotes and then executing line module safe eval line relevant after adding quotes line looks like something content something else and when passed to module safe eval the pythons s implicit string concatenation is applied and it looses all quotes haven t checked if problem exists in development version it seems module was significantly rewritten ,1 789,4389731331.0,IssuesEvent,2016-08-08 23:20:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive doesn't extract changed tar file,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive module ansible-modules-core/files/unarchive.py ##### ANSIBLE VERSION 2.1.0.0-1ppa~trusty ##### CONFIGURATION none ##### OS / ENVIRONMENT N/A ##### SUMMARY unarchive is not updating dest (extract the tar) if tar content changed and still keep old file structures, (means if you run ""tar -d"", it reports Mod Time changed) by checking the ansible-modules-core/files/unarchive.py, it seems TgzArchive is not handling ""Mod time differs"" condition in is_unarchived method ##### STEPS TO REPRODUCE 1) prepare tar file from folder, make it demo.tar 2) run this command in playbook - unarchive: src=/opt/packages/demo.tar dest=/opt/app/ copy=no 3) update content of any file, make newer demo.tar 4) run the same playbook ##### EXPECTED RESULTS the dest folder should be updated, ##### ACTUAL RESULTS ansible ubuntu skipped it, even though the tar file is changed TASK [unarchive] *************************************************************** ok: [localhost] ",True,"unarchive doesn't extract changed tar file - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive module ansible-modules-core/files/unarchive.py ##### ANSIBLE VERSION 2.1.0.0-1ppa~trusty ##### CONFIGURATION none ##### OS / ENVIRONMENT N/A ##### SUMMARY unarchive is not updating dest (extract the tar) if tar content changed and still keep old file structures, (means if you run ""tar -d"", it reports Mod Time changed) by checking the ansible-modules-core/files/unarchive.py, it seems TgzArchive is not handling ""Mod time differs"" condition in is_unarchived method ##### STEPS TO REPRODUCE 1) prepare tar file from folder, make it demo.tar 2) run this command in playbook - unarchive: src=/opt/packages/demo.tar dest=/opt/app/ copy=no 3) update content of any file, make newer demo.tar 4) run the same playbook ##### EXPECTED RESULTS the dest folder should be updated, ##### ACTUAL RESULTS ansible ubuntu skipped it, even though the tar file is changed TASK [unarchive] *************************************************************** ok: [localhost] ",1,unarchive doesn t extract changed tar file issue type bug report component name unarchive module ansible modules core files unarchive py ansible version trusty configuration none 
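The lineinfile analysis above can be reproduced without Ansible at all; `ast.literal_eval` stands in for `module.safe_eval` here, which is an assumption about equivalent behaviour, but it shows the implicit string-literal concatenation that swallows the quotes:

```
# Minimal sketch: once the value is wrapped in single quotes and evaluated,
# Python sees three adjacent string literals and concatenates them, so the
# embedded quotes disappear.
import ast

wrapped = "'something '\"content\"' something else'"
print(wrapped)                    # 'something '"content"' something else'
print(ast.literal_eval(wrapped))  # something content something else  <- quotes gone
```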
os environment n a summary unarchive is not updating dest extract the tar if tar content changed and still keep old file structures means if you run tar d it reports mod time changed by checking the ansible modules core files unarchive py it seems tgzarchive is not handling mod time differs condition in is unarchived method steps to reproduce prepare tar file from folder make it demo tar run this command in playbook unarchive src opt packages demo tar dest opt app copy no update content of any file make newer demo tar run the same playbook expected results the dest folder should be updated actual results ansible ubuntu skipped it even though the tar file is changed task ok ,1 819,4441933248.0,IssuesEvent,2016-08-19 11:23:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,support plain gzip in unarchive,feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION N/A ##### SUMMARY If an administrator needs to distribute a single file in compressed form, she may use gzip but would have no reason to use tar since there are not multiple files involved. unarchive claims to support gzip archives but in reality only supports .tar.gz or .tgz archives. Either the documentation or the code needs to be updated. ",True,"support plain gzip in unarchive - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION N/A ##### SUMMARY If an administrator needs to distribute a single file in compressed form, she may use gzip but would have no reason to use tar since there are not multiple files involved. unarchive claims to support gzip archives but in reality only supports .tar.gz or .tgz archives. Either the documentation or the code needs to be updated. ",1,support plain gzip in unarchive issue type feature idea component name unarchive module ansible version n a summary if an administrator needs to distribute a single file in compressed form she may use gzip but would have no reason to use tar since there are not multiple files involved unarchive claims to support gzip archives but in reality only supports tar gz or tgz archives either the documentation or the code needs to be updated ,1 800,4417453085.0,IssuesEvent,2016-08-15 05:26:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cron module erases other cronjobs of user root and puts cronjobs of other users in /var/cron/tabs/root,bug_report waiting_on_maintainer,"This issue is not found yet in bug tracker ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron module ##### ANSIBLE VERSION ansible 2.0.1.0 config file = /usr/local/etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION forks = 200 ssh_args = -o ControlMaster=auto -o ControlPersist=60s pipelining = True ##### OS / ENVIRONMENT FreeBSD 10.x ##### SUMMARY cron module erases other cronjobs of definite user root and puts cronjobs of other users in /var/cron/tabs/root. ##### STEPS TO REPRODUCE This problem meets at 2.0.1.0 only. Example: ``` $ less unixadm/roles/host-monitoring/tasks/main.yml ... 
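The unarchive report above says modification times are ignored when deciding whether to extract. A small sketch of the `tar -d`-style check it asks for, written with the standard `tarfile` module; the paths are the ones from the report and are only placeholders, and this is not the module's actual is_unarchived() logic:

```
# Minimal sketch: report the archive as "changed" when any regular member's
# recorded mtime differs from the file already extracted at the destination.
import os
import tarfile

def needs_extract(archive_path, dest):
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            target = os.path.join(dest, member.name)
            if not os.path.exists(target):
                return True                         # new file in the archive
            if int(os.path.getmtime(target)) != member.mtime:
                return True                         # mtime differs -> changed
    return False

if os.path.exists("/opt/packages/demo.tar"):
    print(needs_extract("/opt/packages/demo.tar", "/opt/app"))
```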
- cron: name=""raidstat"" minute=""*/15"" user=root job=""/usr/local/sbin/raidstat -w >/dev/null 2>&1"" tags: host-monitoring become: yes - cron: name=""check snmpd"" minute=""*/5"" user=root job=""/usr/local/sbin/check_snmpd >/dev/null 2>&1"" tags: host-monitoring become: yes - cron: name=""tcp_states"" minute=""*/5"" user=snmp job=""/usr/local/sbin/tcp_states > /tmp/tcp_states"" tags: host-monitoring become: yes ... playbook: $ ansible-playbook -i hosts_store unixadm/store.yml --tags='common,host-monitoring' -l 'store6*' ``` ##### EXPECTED RESULTS ``` # crontab -l #Ansible: freebsd-update 0 14 * * * /usr/sbin/freebsd-update -f /usr/local/etc/freebsd-update.conf -t /dev/null cron #Ansible: raidstat */15 * * * * /usr/local/sbin/raidstat -w >/dev/null 2>&1 #Ansible: check snmpd */5 * * * * /usr/local/sbin/check_snmpd >/dev/null 2>&1 # crontab -l -u snmp #Ansible: tcp_states */5 * * * * /usr/local/sbin/tcp_states > /tmp/tcp_states ``` ##### ACTUAL RESULTS ``` # crontab -l -u root #Ansible: tcp_states */5 * * * * /usr/local/sbin/tcp_states > /tmp/tcp_states ``` We analized code of cron.py, making diffs with 2.0.0.2 version: ``` --- ansible-2.0.0.2/lib/ansible/modules/core/system/cron.py 2016-01-15 01:33:30.000000000 +0300 +++ ansible-2.0.1.0/lib/ansible/modules/core/system/cron.py 2016-02-25 06:00:58.000000000 +0300 @@ -45,7 +45,7 @@ options: name: description: - - Description of a crontab entry. + - Description of a crontab entry. Required if state=absent default: null required: false user: @@ -383,7 +383,7 @@ return ""chown %s %s ; su '%s' -c '%s %s'"" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path)) else: user = '-u %s' % pipes.quote(self.user) - return ""%s %s %s"" % (CRONCMD , user, pipes.quote(path)) + return ""%s %s %s"" % (CRONCMD , pipes.quote(path), user) ``` Then, we changed one line in new cron.py, and now it works! Please fix this ASAP. ",True,"cron module erases other cronjobs of user root and puts cronjobs of other users in /var/cron/tabs/root - This issue is not found yet in bug tracker ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cron module ##### ANSIBLE VERSION ansible 2.0.1.0 config file = /usr/local/etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION forks = 200 ssh_args = -o ControlMaster=auto -o ControlPersist=60s pipelining = True ##### OS / ENVIRONMENT FreeBSD 10.x ##### SUMMARY cron module erases other cronjobs of definite user root and puts cronjobs of other users in /var/cron/tabs/root. ##### STEPS TO REPRODUCE This problem meets at 2.0.1.0 only. Example: ``` $ less unixadm/roles/host-monitoring/tasks/main.yml ... - cron: name=""raidstat"" minute=""*/15"" user=root job=""/usr/local/sbin/raidstat -w >/dev/null 2>&1"" tags: host-monitoring become: yes - cron: name=""check snmpd"" minute=""*/5"" user=root job=""/usr/local/sbin/check_snmpd >/dev/null 2>&1"" tags: host-monitoring become: yes - cron: name=""tcp_states"" minute=""*/5"" user=snmp job=""/usr/local/sbin/tcp_states > /tmp/tcp_states"" tags: host-monitoring become: yes ... 
playbook: $ ansible-playbook -i hosts_store unixadm/store.yml --tags='common,host-monitoring' -l 'store6*' ``` ##### EXPECTED RESULTS ``` # crontab -l #Ansible: freebsd-update 0 14 * * * /usr/sbin/freebsd-update -f /usr/local/etc/freebsd-update.conf -t /dev/null cron #Ansible: raidstat */15 * * * * /usr/local/sbin/raidstat -w >/dev/null 2>&1 #Ansible: check snmpd */5 * * * * /usr/local/sbin/check_snmpd >/dev/null 2>&1 # crontab -l -u snmp #Ansible: tcp_states */5 * * * * /usr/local/sbin/tcp_states > /tmp/tcp_states ``` ##### ACTUAL RESULTS ``` # crontab -l -u root #Ansible: tcp_states */5 * * * * /usr/local/sbin/tcp_states > /tmp/tcp_states ``` We analized code of cron.py, making diffs with 2.0.0.2 version: ``` --- ansible-2.0.0.2/lib/ansible/modules/core/system/cron.py 2016-01-15 01:33:30.000000000 +0300 +++ ansible-2.0.1.0/lib/ansible/modules/core/system/cron.py 2016-02-25 06:00:58.000000000 +0300 @@ -45,7 +45,7 @@ options: name: description: - - Description of a crontab entry. + - Description of a crontab entry. Required if state=absent default: null required: false user: @@ -383,7 +383,7 @@ return ""chown %s %s ; su '%s' -c '%s %s'"" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path)) else: user = '-u %s' % pipes.quote(self.user) - return ""%s %s %s"" % (CRONCMD , user, pipes.quote(path)) + return ""%s %s %s"" % (CRONCMD , pipes.quote(path), user) ``` Then, we changed one line in new cron.py, and now it works! Please fix this ASAP. ",1,cron module erases other cronjobs of user root and puts cronjobs of other users in var cron tabs root this issue is not found yet in bug tracker issue type bug report component name cron module ansible version ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides configuration forks ssh args o controlmaster auto o controlpersist pipelining true os environment freebsd x summary cron module erases other cronjobs of definite user root and puts cronjobs of other users in var cron tabs root steps to reproduce this problem meets at only example less unixadm roles host monitoring tasks main yml cron name raidstat minute user root job usr local sbin raidstat w dev null tags host monitoring become yes cron name check snmpd minute user root job usr local sbin check snmpd dev null tags host monitoring become yes cron name tcp states minute user snmp job usr local sbin tcp states tmp tcp states tags host monitoring become yes playbook ansible playbook i hosts store unixadm store yml tags common host monitoring l expected results crontab l ansible freebsd update usr sbin freebsd update f usr local etc freebsd update conf t dev null cron ansible raidstat usr local sbin raidstat w dev null ansible check snmpd usr local sbin check snmpd dev null crontab l u snmp ansible tcp states usr local sbin tcp states tmp tcp states actual results crontab l u root ansible tcp states usr local sbin tcp states tmp tcp states we analized code of cron py making diffs with version ansible lib ansible modules core system cron py ansible lib ansible modules core system cron py options name description description of a crontab entry description of a crontab entry required if state absent default null required false user return chown s s su s c s s pipes quote self user pipes quote path pipes quote self user croncmd pipes quote path else user u s pipes quote self user return s s s croncmd user pipes quote path return s s s croncmd pipes quote path user then we changed one line in new cron py 
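The diff in this record swaps the order of the temp file and the `-u` option when the cron module shells out to `crontab`. A stand-in sketch of the two command lines (not the module source, and the helper names are made up), which makes the FreeBSD failure mode easy to see:

```
# Minimal sketch: on FreeBSD, "crontab FILE -u USER" installs FILE for the
# invoking user (root) instead of USER, while "crontab -u USER FILE" behaves
# as intended.
try:
    from shlex import quote          # Python 3
except ImportError:
    from pipes import quote          # Python 2, as used by the module

CRONCMD = "/usr/bin/crontab"

def write_cmd_2010(user, path):
    # order reported for ansible 2.0.1.0: temp file first, then the user option
    return "%s %s -u %s" % (CRONCMD, quote(path), quote(user))

def write_cmd_fixed(user, path):
    # order the reporter restored: user option first, then the temp file
    return "%s -u %s %s" % (CRONCMD, quote(user), quote(path))

print(write_cmd_2010("snmp", "/tmp/crontab.snmp"))
print(write_cmd_fixed("snmp", "/tmp/crontab.snmp"))
```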
and now it works please fix this asap ,1 1042,4846787800.0,IssuesEvent,2016-11-10 13:01:44,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Missing documentation for ""profile"" option of ""rds"" Amazon module.",affects_2.3 aws cloud docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ansible-modules-core/cloud/amazon/rds.py ##### ANSIBLE VERSION all? ##### CONFIGURATION n/a ##### SUMMARY The documentation for ""profile"" option (and probably other options) is not shown in the current documentation of [RDS Amazon module](http://docs.ansible.com/ansible/rds_module.html). Probably related to missing `extends_documentation_fragment: aws` and/or `extends_documentation_fragment: ec2` in [cloud/amazon/rds.py](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/rds.py#L223-L227)? ",True,"Missing documentation for ""profile"" option of ""rds"" Amazon module. - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ansible-modules-core/cloud/amazon/rds.py ##### ANSIBLE VERSION all? ##### CONFIGURATION n/a ##### SUMMARY The documentation for ""profile"" option (and probably other options) is not shown in the current documentation of [RDS Amazon module](http://docs.ansible.com/ansible/rds_module.html). Probably related to missing `extends_documentation_fragment: aws` and/or `extends_documentation_fragment: ec2` in [cloud/amazon/rds.py](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/rds.py#L223-L227)? ",1,missing documentation for profile option of rds amazon module issue type documentation report component name ansible modules core cloud amazon rds py ansible version all configuration n a summary the documentation for profile option and probably other options is not shown in the current documentation of probably related to missing extends documentation fragment aws and or extends documentation fragment in ,1 1684,6574154670.0,IssuesEvent,2017-09-11 11:44:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ios_command module fails with ""msg"": ""matched error in response: ...""",affects_2.3 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When running the ios_command module with ""show ip bgp"" on a Cisco 6500 the module fails with; ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } There is a difference in the output and Ansible seems to react to the output from the Cisco 6500 containing some keywords that causes it to believe the command failed. Actual output from a Cisco 6500; BGP table version is 61521, local router ID is x.x.x.x Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter, x best-external, a additional-path, c RIB-compressed, Origin codes: i - IGP, e - EGP, ? 
- incomplete RPKI validation codes: V valid, I invalid, N Not found --- output cut --- Same command from a Cisco 3750(works); BGP table version is 789, local router ID is 185.25.44.53 Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, r RIB-failure, S Stale Origin codes: i - IGP, e - EGP, ? - incomplete --- output cut ---- ##### STEPS TO REPRODUCE Running this playbook towards a Cisco 6500 ``` --- - hosts: [prod_rt] connection: local gather_facts: no tasks: - name: Run command ios_command: host: ""{{ inventory_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" commands: - 'show ip bgp' register: output - debug: msg={{ output.stdout_lines }} ``` ##### EXPECTED RESULTS Running this command on another router(tried on 3750) works and shows the router output as intended. ##### ACTUAL RESULTS Fails with; ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } ``` pa@PA-Hanssons-MacBook-Pro:~/PycharmProjects/network_ansible$ ansible-playbook show_ip_bgp_test.yml -l rt1.age -vvv Using /Users/pa/PycharmProjects/network_ansible/ansible.cfg as config file PLAYBOOK: show_ip_bgp_test.yml ************************************************* 1 plays in show_ip_bgp_test.yml PLAY [prod_rt] ***************************************************************** TASK [Run command] ************************************************************* task path: /Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.yml:7 Using module file /Library/Python/2.7/site-packages/ansible/modules/core/network/ios/ios_command.py ESTABLISH LOCAL CONNECTION FOR USER: pa EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `"" && echo ansible-tmp-1479388715.07-39385131693979=""` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `"" ) && sleep 0' PUT /var/folders/d1/xrrfzjfd52n_9rrtyl2wckl40000gp/T/tmp6bkoai TO /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py EXEC /bin/sh -c 'chmod u+x /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py; rm -rf ""/Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/"" > /dev/null 2>&1 && sleep 0' fatal: [rt1.age]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show ip bgp"" ], ""host"": ""rt1.age"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""retries"": 10, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": null, ""use_ssl"": true, ""username"": """", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""ios_command"" }, ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? 
- incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } to retry, use: --limit @/Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.retry PLAY RECAP ********************************************************************* rt1.age : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"ios_command module fails with ""msg"": ""matched error in response: ..."" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When running the ios_command module with ""show ip bgp"" on a Cisco 6500 the module fails with; ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } There is a difference in the output and Ansible seems to react to the output from the Cisco 6500 containing some keywords that causes it to believe the command failed. Actual output from a Cisco 6500; BGP table version is 61521, local router ID is x.x.x.x Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter, x best-external, a additional-path, c RIB-compressed, Origin codes: i - IGP, e - EGP, ? - incomplete RPKI validation codes: V valid, I invalid, N Not found --- output cut --- Same command from a Cisco 3750(works); BGP table version is 789, local router ID is 185.25.44.53 Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, r RIB-failure, S Stale Origin codes: i - IGP, e - EGP, ? - incomplete --- output cut ---- ##### STEPS TO REPRODUCE Running this playbook towards a Cisco 6500 ``` --- - hosts: [prod_rt] connection: local gather_facts: no tasks: - name: Run command ios_command: host: ""{{ inventory_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" commands: - 'show ip bgp' register: output - debug: msg={{ output.stdout_lines }} ``` ##### EXPECTED RESULTS Running this command on another router(tried on 3750) works and shows the router output as intended. ##### ACTUAL RESULTS Fails with; ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? 
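A likely reading of the ios_command failure above (an inference from comparing the 6500 and 3750 outputs, not something the report states) is that one of the module's broad error patterns matches the RPKI status legend that newer IOS prints, e.g. the words "invalid" and "Not found". The patterns below are illustrative only, not the module's actual list:

```
# Minimal sketch: a catch-all "... not found" / "invalid ..." rule happily
# matches the harmless RPKI legend, so the response is treated as an error.
import re

error_patterns = [
    re.compile(r"% ?error", re.I),
    re.compile(r"invalid input", re.I),
    re.compile(r"[^\r\n]+ not found", re.I),   # the suspicious one
]

rpki_legend = "RPKI validation codes: V valid, I invalid, N Not found\r\n"

for pattern in error_patterns:
    match = pattern.search(rpki_legend)
    if match:
        print("false positive on: %r" % match.group())
```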
- incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } ``` pa@PA-Hanssons-MacBook-Pro:~/PycharmProjects/network_ansible$ ansible-playbook show_ip_bgp_test.yml -l rt1.age -vvv Using /Users/pa/PycharmProjects/network_ansible/ansible.cfg as config file PLAYBOOK: show_ip_bgp_test.yml ************************************************* 1 plays in show_ip_bgp_test.yml PLAY [prod_rt] ***************************************************************** TASK [Run command] ************************************************************* task path: /Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.yml:7 Using module file /Library/Python/2.7/site-packages/ansible/modules/core/network/ios/ios_command.py ESTABLISH LOCAL CONNECTION FOR USER: pa EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `"" && echo ansible-tmp-1479388715.07-39385131693979=""` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `"" ) && sleep 0' PUT /var/folders/d1/xrrfzjfd52n_9rrtyl2wckl40000gp/T/tmp6bkoai TO /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py EXEC /bin/sh -c 'chmod u+x /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py; rm -rf ""/Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/"" > /dev/null 2>&1 && sleep 0' fatal: [rt1.age]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""commands"": [ ""show ip bgp"" ], ""host"": ""rt1.age"", ""interval"": 1, ""match"": ""all"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""retries"": 10, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": null, ""use_ssl"": true, ""username"": """", ""validate_certs"": true, ""wait_for"": null }, ""module_name"": ""ios_command"" }, ""msg"": ""matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? 
- incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"" } to retry, use: --limit @/Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.retry PLAY RECAP ********************************************************************* rt1.age : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,ios command module fails with msg matched error in response issue type bug report component name ios command ansible version ansible configuration os environment n a summary when running the ios command module with show ip bgp on a cisco the module fails with msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n there is a difference in the output and ansible seems to react to the output from the cisco containing some keywords that causes it to believe the command failed actual output from a cisco bgp table version is local router id is x x x x status codes s suppressed d damped h history valid best i internal r rib failure s stale m multipath b backup path f rt filter x best external a additional path c rib compressed origin codes i igp e egp incomplete rpki validation codes v valid i invalid n not found output cut same command from a cisco works bgp table version is local router id is status codes s suppressed d damped h history valid best i internal r rib failure s stale origin codes i igp e egp incomplete output cut steps to reproduce running this playbook towards a cisco hosts connection local gather facts no tasks name run command ios command host inventory hostname username username password password commands show ip bgp register output debug msg output stdout lines expected results running this command on another router tried on works and shows the router output as intended actual results fails with msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n pa pa hanssons macbook pro pycharmprojects network ansible ansible playbook show ip bgp test yml l age vvv using users pa pycharmprojects network ansible ansible cfg as config file playbook show ip bgp test yml plays in show ip bgp test yml play task task path users pa pycharmprojects network ansible show ip bgp test yml using module file library python site packages ansible modules core network ios ios command py establish local connection for user pa exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users pa ansible tmp ansible tmp ios command py exec bin sh c chmod u x users pa ansible tmp ansible tmp users pa ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python users pa ansible tmp ansible tmp ios command py rm rf users pa ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args auth pass null authorize false commands show ip bgp host age interval match all password value specified in no log parameter port null provider null retries ssh keyfile null timeout transport null use ssl true username validate certs true wait for null module name ios command msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n to retry use limit 
users pa pycharmprojects network ansible show ip bgp test retry play recap age ok changed unreachable failed ,1 808,4425771330.0,IssuesEvent,2016-08-16 16:20:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,AttributeError: 'DockerManager' object has no attribute 'client',bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ##### STEPS TO REPRODUCE ``` - name: run the site in a docker container docker: name: app_test env_file: /opt/app/env.conf publish_all_ports: yes cap_add: - ""SYS_PTRACE"" tty: yes detach: yes volumes: ""/usr/share/GeoIP/GeoLiteCity.dat:/usr/share/GeoIP/GeoLiteCity.dat"" image: ""app_test:{{ RUBY_SEMVER }}"" state: started when: RUBY_SEMVER is defined ``` ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 1975, in main() File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 1912, in main manager = DockerManager(module) File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 749, in __init__ self.environment = self.get_environment(env, env_file) File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 895, in get_environment self.ensure_capability('env_file') File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 870, in ensure_capability self._check_capabilities() File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 853, in _check_capabilities api_version = self.client.version()['ApiVersion'] AttributeError: 'DockerManager' object has no attribute 'client' fatal: [srv-1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 1975, in \n main()\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 1912, in main\n manager = DockerManager(module)\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 749, in __init__\n self.environment = self.get_environment(env, env_file)\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 895, in get_environment\n self.ensure_capability('env_file')\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 870, in ensure_capability\n self._check_capabilities()\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 853, in _check_capabilities\n api_version = self.client.version()['ApiVersion']\nAttributeError: 'DockerManager' object has no attribute 'client'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"AttributeError: 'DockerManager' object has no attribute 'client' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ##### STEPS TO REPRODUCE ``` - name: run the site in a docker container docker: name: app_test env_file: /opt/app/env.conf publish_all_ports: yes cap_add: - ""SYS_PTRACE"" tty: yes detach: yes volumes: ""/usr/share/GeoIP/GeoLiteCity.dat:/usr/share/GeoIP/GeoLiteCity.dat"" image: ""app_test:{{ RUBY_SEMVER }}"" state: started when: RUBY_SEMVER is defined ``` ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 1975, in main() File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 1912, in main manager = DockerManager(module) File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 749, in __init__ self.environment = self.get_environment(env, env_file) File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 895, in get_environment self.ensure_capability('env_file') File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 870, in ensure_capability self._check_capabilities() File ""/tmp/ansible_CBwbjg/ansible_module_docker.py"", line 853, in _check_capabilities api_version = self.client.version()['ApiVersion'] AttributeError: 'DockerManager' object has no attribute 'client' fatal: [srv-1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""docker""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 1975, in \n main()\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 1912, in main\n manager = DockerManager(module)\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 749, in __init__\n self.environment = self.get_environment(env, env_file)\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 895, in get_environment\n self.ensure_capability('env_file')\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 870, in ensure_capability\n self._check_capabilities()\n File \""/tmp/ansible_CBwbjg/ansible_module_docker.py\"", line 853, in _check_capabilities\n api_version = self.client.version()['ApiVersion']\nAttributeError: 'DockerManager' object has no attribute 'client'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,attributeerror dockermanager object has no attribute client issue type bug report component name docker module ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables steps to reproduce name run the site in a docker container docker name app test env file opt app env conf publish all ports yes cap add sys ptrace tty yes detach yes volumes usr share geoip geolitecity dat usr share geoip geolitecity dat image app test ruby semver state started when ruby semver is defined actual results an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible cbwbjg ansible module docker py line in main file tmp ansible cbwbjg ansible module docker py line in main manager dockermanager module file tmp ansible cbwbjg ansible module docker py line in init self environment self get environment env env file file tmp ansible cbwbjg ansible module docker py line in get environment self ensure capability env file file tmp ansible cbwbjg ansible module docker py line in ensure capability self check capabilities file tmp ansible cbwbjg ansible module docker py line in check capabilities api version self client version attributeerror dockermanager object has no attribute client fatal failed changed false failed true invocation module name docker module stderr traceback most recent call last n file tmp ansible cbwbjg ansible module docker py line in n main n file tmp ansible cbwbjg ansible module docker py line in main n manager dockermanager module n file tmp ansible cbwbjg ansible module docker py line in init n self environment self get environment env env 
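The docker traceback in this record is an initialisation-order problem: `_check_capabilities()` dereferences `self.client` before `__init__` has created it whenever `env_file` is set. A stand-in sketch (simplified classes, not the module's real code) that reproduces and then avoids the error:

```
# Minimal sketch of the AttributeError above and of the obvious ordering fix.
class FakeClient(object):
    def version(self):
        return {"ApiVersion": "1.24"}

class BrokenManager(object):
    def __init__(self, env_file=None):
        if env_file:
            self._check_capabilities()      # self.client does not exist yet
        self.client = FakeClient()

    def _check_capabilities(self):
        return self.client.version()["ApiVersion"]

class FixedManager(BrokenManager):
    def __init__(self, env_file=None):
        self.client = FakeClient()          # create the client first
        if env_file:
            self._check_capabilities()

try:
    BrokenManager(env_file="/opt/app/env.conf")
except AttributeError as exc:
    print("broken:", exc)
print("fixed :", FixedManager(env_file="/opt/app/env.conf")._check_capabilities())
```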
file n file tmp ansible cbwbjg ansible module docker py line in get environment n self ensure capability env file n file tmp ansible cbwbjg ansible module docker py line in ensure capability n self check capabilities n file tmp ansible cbwbjg ansible module docker py line in check capabilities n api version self client version nattributeerror dockermanager object has no attribute client n module stdout msg module failure parsed false ,1 892,4553457459.0,IssuesEvent,2016-09-13 04:57:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bad value substitution in ini_file Module with percent placeholder values,affects_1.7 bug_report P3 waiting_on_maintainer,"##### Issue Type: Bug ##### Component Name: ini_file module ##### Ansible Version: 1.7.2 1.8.2 ##### Environment: Mac OSX 10.10.1 Yosemite ##### Summary: I need special values in an ini-file to configure my supervisor-daemon. For example a value looks like this: process_name=%(program_name)s ##### Steps To Reproduce: To reporduce this issue run following playbook twice. ``` --- - hosts: all tasks: - ini_file: dest=""/tmp/tmp.ini"" section=""program:update"" option=""process_name"" value=""%(program_name)s"" ``` ##### Expected Results: After first run everything OK, after second run I get an error and nothing happens. ##### Actual Results: ``` ConfigParser.InterpolationMissingOptionError: Bad value substitution: section: [program:update] option : process_name key : program_name rawval : %(program_name)s ```",True,"Bad value substitution in ini_file Module with percent placeholder values - ##### Issue Type: Bug ##### Component Name: ini_file module ##### Ansible Version: 1.7.2 1.8.2 ##### Environment: Mac OSX 10.10.1 Yosemite ##### Summary: I need special values in an ini-file to configure my supervisor-daemon. For example a value looks like this: process_name=%(program_name)s ##### Steps To Reproduce: To reporduce this issue run following playbook twice. ``` --- - hosts: all tasks: - ini_file: dest=""/tmp/tmp.ini"" section=""program:update"" option=""process_name"" value=""%(program_name)s"" ``` ##### Expected Results: After first run everything OK, after second run I get an error and nothing happens. 
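Editor's note on the ini_file interpolation failure reported in this record: the traceback points at ConfigParser expanding %(program_name)s when the module re-reads the file on the second run. A minimal workaround sketch, assuming the same file, section and value as the task above; it writes the raw line with lineinfile so ConfigParser interpolation is never involved:
```
- name: set supervisor process_name without ConfigParser interpolation (sketch)
  lineinfile:
    dest: /tmp/tmp.ini
    insertafter: '^\[program:update\]'
    regexp: '^process_name='
    line: 'process_name=%(program_name)s'
```
The trade-off is that lineinfile will not create the [program:update] section if it is missing; it only manages the single line.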
##### Actual Results: ``` ConfigParser.InterpolationMissingOptionError: Bad value substitution: section: [program:update] option : process_name key : program_name rawval : %(program_name)s ```",1,bad value substitution in ini file module with percent placeholder values issue type bug component name ini file module ansible version environment mac osx yosemite summary i need special values in an ini file to configure my supervisor daemon for example a value looks like this process name program name s steps to reproduce to reporduce this issue run following playbook twice hosts all tasks ini file dest tmp tmp ini section program update option process name value program name s expected results after first run everything ok after second run i get an error and nothing happens actual results configparser interpolationmissingoptionerror bad value substitution section option process name key program name rawval program name s ,1 1815,6577317912.0,IssuesEvent,2017-09-12 00:04:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,AWS - ec2 should support unique instance criteria by VPC/Subnet,affects_2.1 aws bug_report cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Bug Report/Feature Idea ##### COMPONENT NAME Module: ec2 ##### ANSIBLE VERSION 2.1.0.0 (but happens in older versions) ##### CONFIGURATION n/a ##### OS / ENVIRONMENT n/a ##### SUMMARY ec2 module should support indempodence by VPC/Subnet ##### STEPS TO REPRODUCE - Create playbook which creates an EC2 instance into a subnet (exact_count = 1) - Re-run -- notice it does not re-create the instance as expected - Modify playbook, add second EC2 instance -- same exact parameters including name but different VPC subnet - Re-run -- notice it does NOT create the new instance even though it should be located in a different subnet or VPC altogether - Modify playbook, modify second EC2 instance -- change name of the instance - Re-run -- notice it does create the new instance because of a different name altogether ##### EXPECTED RESULTS Ansible should have created a second EC2 instance EC2 should respect vpc_subnet_id as unique criteria ##### ACTUAL RESULTS Ansible did NOT create the second new instance in a different subnet because of the same parameters. ",True,"AWS - ec2 should support unique instance criteria by VPC/Subnet - ##### ISSUE TYPE - Bug Report/Feature Idea ##### COMPONENT NAME Module: ec2 ##### ANSIBLE VERSION 2.1.0.0 (but happens in older versions) ##### CONFIGURATION n/a ##### OS / ENVIRONMENT n/a ##### SUMMARY ec2 module should support indempodence by VPC/Subnet ##### STEPS TO REPRODUCE - Create playbook which creates an EC2 instance into a subnet (exact_count = 1) - Re-run -- notice it does not re-create the instance as expected - Modify playbook, add second EC2 instance -- same exact parameters including name but different VPC subnet - Re-run -- notice it does NOT create the new instance even though it should be located in a different subnet or VPC altogether - Modify playbook, modify second EC2 instance -- change name of the instance - Re-run -- notice it does create the new instance because of a different name altogether ##### EXPECTED RESULTS Ansible should have created a second EC2 instance EC2 should respect vpc_subnet_id as unique criteria ##### ACTUAL RESULTS Ansible did NOT create the second new instance in a different subnet because of the same parameters. 
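Editor's note on the EC2 per-subnet uniqueness request captured in the adjacent record: until the ec2 module keys exact_count on vpc_subnet_id, the usual workaround is to fold the subnet into the tags that count_tag matches on, so each subnet is counted separately. A hedged sketch; the AMI and subnet IDs are placeholders:
```
- name: keep exactly one web instance per subnet (sketch)
  ec2:
    region: us-west-2
    image: ami-00000000            # placeholder AMI
    instance_type: t2.micro
    vpc_subnet_id: "{{ item }}"
    instance_tags:
      Name: web
      subnet: "{{ item }}"
    count_tag:
      Name: web
      subnet: "{{ item }}"
    exact_count: 1
  with_items:
    - subnet-aaaa1111              # placeholder subnet IDs
    - subnet-bbbb2222
```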
",1,aws should support unique instance criteria by vpc subnet issue type bug report feature idea component name module ansible version but happens in older versions configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary module should support indempodence by vpc subnet steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create playbook which creates an instance into a subnet exact count re run notice it does not re create the instance as expected modify playbook add second instance same exact parameters including name but different vpc subnet re run notice it does not create the new instance even though it should be located in a different subnet or vpc altogether modify playbook modify second instance change name of the instance re run notice it does create the new instance because of a different name altogether expected results ansible should have created a second instance should respect vpc subnet id as unique criteria actual results ansible did not create the second new instance in a different subnet because of the same parameters ,1 1752,6574969094.0,IssuesEvent,2017-09-11 14:38:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"declare a ""port"" parameter for os_security_group_rule",affects_2.3 cloud feature_idea openstack waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_security_group_rule I end up with a lot of ""os_security_group_rule:"" statements that have: port_range_max: ""{{ item }}"" port_range_min: ""{{ item }}"" For rules where only a single port is needed. Is there some reason not to define a ""port:"" parameter that if that has been passed set port_range_min and port_range_max to that value? That should pass the right things down the chain through shade to openstack. ",True,"declare a ""port"" parameter for os_security_group_rule - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_security_group_rule I end up with a lot of ""os_security_group_rule:"" statements that have: port_range_max: ""{{ item }}"" port_range_min: ""{{ item }}"" For rules where only a single port is needed. Is there some reason not to define a ""port:"" parameter that if that has been passed set port_range_min and port_range_max to that value? That should pass the right things down the chain through shade to openstack. 
",1,declare a port parameter for os security group rule issue type feature idea component name os security group rule i end up with a lot of os security group rule statements that have port range max item port range min item for rules where only a single port is needed is there some reason not to define a port parameter that if that has been passed set port range min and port range max to that value that should pass the right things down the chain through shade to openstack ,1 1732,6574849475.0,IssuesEvent,2017-09-11 14:16:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt_repository: doc generation confused by file mode - interpret it as an octal value,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_repository ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ( Debian stretch , irrelevant here) ##### SUMMARY The ansible docs website tells to set '420' as default mode for apt sources.list.d files. Which is wrong per debian apt breaks on this. Luckily this is only in the generated documentation, https://docs.ansible.com/ansible/apt_repository_module.html#options the documentation from the sources is fine but misses quotes around '0644' mode value so the docs web page generation does not interpret this as an octal value (0644 octal equals 420 decimal). https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_repository.py#L48 ##### STEPS TO REPRODUCE ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options ``` ##### EXPECTED RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 0644 as default mode value for apt sources. ``` ##### ACTUAL RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 420 as default mode value for apt sources. ``` ",True,"apt_repository: doc generation confused by file mode - interpret it as an octal value - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_repository ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ( Debian stretch , irrelevant here) ##### SUMMARY The ansible docs website tells to set '420' as default mode for apt sources.list.d files. Which is wrong per debian apt breaks on this. Luckily this is only in the generated documentation, https://docs.ansible.com/ansible/apt_repository_module.html#options the documentation from the sources is fine but misses quotes around '0644' mode value so the docs web page generation does not interpret this as an octal value (0644 octal equals 420 decimal). https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_repository.py#L48 ##### STEPS TO REPRODUCE ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options ``` ##### EXPECTED RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 0644 as default mode value for apt sources. ``` ##### ACTUAL RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 420 as default mode value for apt sources. 
``` ",1,apt repository doc generation confused by file mode interpret it as an octal value issue type bug report component name apt repository ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment debian stretch irrelevant here summary the ansible docs website tells to set as default mode for apt sources list d files which is wrong per debian apt breaks on this luckily this is only in the generated documentation the documentation from the sources is fine but misses quotes around mode value so the docs web page generation does not interpret this as an octal value octal equals decimal steps to reproduce open expected results open and read as default mode value for apt sources actual results open and read as default mode value for apt sources ,1 1677,6574117379.0,IssuesEvent,2017-09-11 11:33:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum: different results on CentOS vs RHEL,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION I am calling the ANSIBLE_ROLES_PATH environment variable during execution. ##### OS / ENVIRONMENT Running from: **Fedora release 23 (Twenty Three)** Managing: 1. **CentOS Linux release 7.2.1511 (Core)** 2. **Red Hat Enterprise Linux Server release 7.3 (Maipo)** ##### SUMMARY A playbook, which installs some yum packages, succeeds on **CentOS Linux release 7.2.1511 (Core)** but fails on **Red Hat Enterprise Linux Server release 7.3 (Maipo)**. I have confirmed that if I run `yum install ` manually on both the CentOS and RHEL machines, the installation works on both machines as expected. ##### STEPS TO REPRODUCE 1. Create the playbook as shown below 2. Run the playbook on a CentOS & RHEL machine 3. Installation will run flawlessly on the CentOS machine 4. Installation will fail on the RHEL machine due to `No package matching ` If I remove the following packages from the with_items list 1. libsemanage-python 2. policycoreutils-python 3. setroubleshoot 4. tmux And run the playbook again on CentOS & RHEL, it will run flawlessly on both of the machines. Another scenario that works around the problem is is breaking writing the yum module four times, one time for each of the four packages stated above. Installing them individually works on both CentOS & RHEL. 
``` -- - name: Install base applications - hosts: redhat yum: name={{ item }} state=present with_items: - vim-enhanced - gcc - wget - curl - chrony - bzip2 - libsemanage-python - policycoreutils-python - setroubleshoot - firewalld - tmux ``` ##### EXPECTED RESULTS ``` changed: [CentOS7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) changed: [RHEL7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) ``` ##### ACTUAL RESULTS ``` failed: [RHEL7] (item=[u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [ ""vim-enhanced"", ""gcc"", ""wget"", ""curl"", ""chrony"", ""bzip2"", ""libsemanage-python"", ""policycoreutils-python"", ""setroubleshoot"", ""firewalld"", ""tmux"" ], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true }, ""module_name"": ""yum"" }, ""item"": [ ""vim-enhanced"", ""gcc"", ""wget"", ""curl"", ""chrony"", ""bzip2"", ""libsemanage-python"", ""policycoreutils-python"", ""setroubleshoot"", ""firewalld"", ""tmux"" ], ""msg"": ""No package matching 'libsemanage-python' found available, installed or updated"", ""rc"": 126, ""results"": [ ""vim-enhanced-7.4.160-1.el7.x86_64 providing vim-enhanced is already installed"", ""gcc-4.8.5-11.el7.x86_64 providing gcc is already installed"", ""wget-1.14-13.el7.x86_64 providing wget is already installed"", ""curl-7.29.0-35.el7.x86_64 providing curl is already installed"", ""chrony-2.1.1-3.el7.x86_64 providing chrony is already installed"", ""bzip2-1.0.6-13.el7.x86_64 providing bzip2 is already installed"", ""No package matching 'libsemanage-python' found available, installed or updated"" ] } ``` ",True,"yum: different results on CentOS vs RHEL - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION I am calling the ANSIBLE_ROLES_PATH environment variable during execution. ##### OS / ENVIRONMENT Running from: **Fedora release 23 (Twenty Three)** Managing: 1. **CentOS Linux release 7.2.1511 (Core)** 2. **Red Hat Enterprise Linux Server release 7.3 (Maipo)** ##### SUMMARY A playbook, which installs some yum packages, succeeds on **CentOS Linux release 7.2.1511 (Core)** but fails on **Red Hat Enterprise Linux Server release 7.3 (Maipo)**. I have confirmed that if I run `yum install ` manually on both the CentOS and RHEL machines, the installation works on both machines as expected. ##### STEPS TO REPRODUCE 1. Create the playbook as shown below 2. Run the playbook on a CentOS & RHEL machine 3. Installation will run flawlessly on the CentOS machine 4. Installation will fail on the RHEL machine due to `No package matching ` If I remove the following packages from the with_items list 1. libsemanage-python 2. policycoreutils-python 3. setroubleshoot 4. tmux And run the playbook again on CentOS & RHEL, it will run flawlessly on both of the machines. 
Another scenario that works around the problem is is breaking writing the yum module four times, one time for each of the four packages stated above. Installing them individually works on both CentOS & RHEL. ``` -- - name: Install base applications - hosts: redhat yum: name={{ item }} state=present with_items: - vim-enhanced - gcc - wget - curl - chrony - bzip2 - libsemanage-python - policycoreutils-python - setroubleshoot - firewalld - tmux ``` ##### EXPECTED RESULTS ``` changed: [CentOS7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) changed: [RHEL7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) ``` ##### ACTUAL RESULTS ``` failed: [RHEL7] (item=[u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [ ""vim-enhanced"", ""gcc"", ""wget"", ""curl"", ""chrony"", ""bzip2"", ""libsemanage-python"", ""policycoreutils-python"", ""setroubleshoot"", ""firewalld"", ""tmux"" ], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true }, ""module_name"": ""yum"" }, ""item"": [ ""vim-enhanced"", ""gcc"", ""wget"", ""curl"", ""chrony"", ""bzip2"", ""libsemanage-python"", ""policycoreutils-python"", ""setroubleshoot"", ""firewalld"", ""tmux"" ], ""msg"": ""No package matching 'libsemanage-python' found available, installed or updated"", ""rc"": 126, ""results"": [ ""vim-enhanced-7.4.160-1.el7.x86_64 providing vim-enhanced is already installed"", ""gcc-4.8.5-11.el7.x86_64 providing gcc is already installed"", ""wget-1.14-13.el7.x86_64 providing wget is already installed"", ""curl-7.29.0-35.el7.x86_64 providing curl is already installed"", ""chrony-2.1.1-3.el7.x86_64 providing chrony is already installed"", ""bzip2-1.0.6-13.el7.x86_64 providing bzip2 is already installed"", ""No package matching 'libsemanage-python' found available, installed or updated"" ] } ``` ",1,yum different results on centos vs rhel issue type bug report component name yum ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables i am calling the ansible roles path environment variable during execution os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from fedora release twenty three managing centos linux release core red hat enterprise linux server release maipo summary a playbook which installs some yum packages succeeds on centos linux release core but fails on red hat enterprise linux server release maipo i have confirmed that if i run yum install manually on both the centos and rhel machines the installation works on both machines as expected steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create the playbook as shown below run the playbook on a centos rhel machine installation will run flawlessly on the centos machine installation 
will fail on the rhel machine due to no package matching if i remove the following packages from the with items list libsemanage python policycoreutils python setroubleshoot tmux and run the playbook again on centos rhel it will run flawlessly on both of the machines another scenario that works around the problem is is breaking writing the yum module four times one time for each of the four packages stated above installing them individually works on both centos rhel name install base applications hosts redhat yum name item state present with items vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux expected results changed item changed item actual results failed item changed false failed true invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install repoquery true list null name vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux state present update cache false validate certs true module name yum item vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux msg no package matching libsemanage python found available installed or updated rc results vim enhanced providing vim enhanced is already installed gcc providing gcc is already installed wget providing wget is already installed curl providing curl is already installed chrony providing chrony is already installed providing is already installed no package matching libsemanage python found available installed or updated ,1 869,4536193279.0,IssuesEvent,2016-09-08 19:39:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,winrm NanoServer Unable to find type System.Security.Cryptography.SHA1CryptoServiceProvider,affects_2.1 bug_report waiting_on_maintainer windows," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_ping ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 16.04.1 LTS ##### SUMMARY Unable to establish winrm connection to new Windows Server Nano ##### STEPS TO REPRODUCE ansible windows -i inventory/host -m win_ping ``` ``` ##### EXPECTED RESULTS Establish winrm connection ##### ACTUAL RESULTS ``` Loaded callback minimal of type stdout, v2.0 <10.0.0.5> ESTABLISH WINRM CONNECTION FOR USER: xxxx on PORT 5986 TO 10.0.0.5 <10.0.0.5> EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1471290564.47-58300212757313"").FullName | Write-Host -Separator ''; <10.0.0.5> PUT ""/tmp/tmpiUuwzr"" TO ""C:\Users\mrembas\AppData\Local\Temp\ansible-tmp-1471290564.47-58300212757313\win_ping.ps1"" 10.0.0.5 | FAILED! 
=> { ""failed"": true, ""msg"": ""#< CLIXML\r\nUnable to find type [System.Security.Cryptography.SHA1CryptoServiceProvider]._x000D__x000A_At line:7 char:9_x000D__x000A_+ $sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Cre ..._x000D__x000A_+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~_x000D__x000A_ + CategoryInfo : InvalidOperation: (System.Security...ServiceProv _x000D__x000A_ ider:TypeName) [], ParentContainsErrorRecordException_x000D__x000A_ + FullyQualifiedErrorId : TypeNotFound_x000D__x000A_ _x000D__x000A_"" } ``` ",True,"winrm NanoServer Unable to find type System.Security.Cryptography.SHA1CryptoServiceProvider - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_ping ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 16.04.1 LTS ##### SUMMARY Unable to establish winrm connection to new Windows Server Nano ##### STEPS TO REPRODUCE ansible windows -i inventory/host -m win_ping ``` ``` ##### EXPECTED RESULTS Establish winrm connection ##### ACTUAL RESULTS ``` Loaded callback minimal of type stdout, v2.0 <10.0.0.5> ESTABLISH WINRM CONNECTION FOR USER: xxxx on PORT 5986 TO 10.0.0.5 <10.0.0.5> EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1471290564.47-58300212757313"").FullName | Write-Host -Separator ''; <10.0.0.5> PUT ""/tmp/tmpiUuwzr"" TO ""C:\Users\mrembas\AppData\Local\Temp\ansible-tmp-1471290564.47-58300212757313\win_ping.ps1"" 10.0.0.5 | FAILED! => { ""failed"": true, ""msg"": ""#< CLIXML\r\nUnable to find type [System.Security.Cryptography.SHA1CryptoServiceProvider]._x000D__x000A_At line:7 char:9_x000D__x000A_+ $sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Cre ..._x000D__x000A_+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~_x000D__x000A_ + CategoryInfo : InvalidOperation: (System.Security...ServiceProv _x000D__x000A_ ider:TypeName) [], ParentContainsErrorRecordException_x000D__x000A_ + FullyQualifiedErrorId : TypeNotFound_x000D__x000A_ _x000D__x000A_"" } ``` ",1,winrm nanoserver unable to find type system security cryptography issue type bug report component name win ping ansible version ansible configuration default os environment ubuntu lts summary unable to establish winrm connection to new windows server nano steps to reproduce ansible windows i inventory host m win ping expected results establish winrm connection actual results loaded callback minimal of type stdout establish winrm connection for user xxxx on port to exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put tmp tmpiuuwzr to c users mrembas appdata local temp ansible tmp win ping failed failed true msg unable to find type at line char cre categoryinfo invalidoperation system security serviceprov ider typename parentcontainserrorrecordexception fullyqualifiederrorid typenotfound ,1 1478,6412426005.0,IssuesEvent,2017-08-08 03:12:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Permissions issue when copying directory,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `copy` ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began. 
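Editor's note on the copy-module permission failure in this record: 509 is the decimal reading of octal 0775, which appears to be what the module receives when the YAML octal literal is passed unquoted, and the 2.1.2 validation then rejects it as neither octal nor symbolic. A hedged sketch of the defensive form, quoting both modes so they reach the module as octal strings:
```
- name: Dotfiles | Install ViM customizations (sketch with quoted modes)
  become: yes
  become_user: "{{ username }}"
  copy:
    src: .vim
    dest: ~/
    mode: "0664"
    directory_mode: "0775"
    force: yes
```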
##### OS / ENVIRONMENT I'm unning Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment. ##### SUMMARY I'm trying to copy a directory (and its files) from `/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well. ##### STEPS TO REPRODUCE As far as I can tell, just run the task below under Ansible 2.1.2. ``` - name: Dotfiles | Install ViM customizations become: yes become_user: ""{{ username }}"" copy: src: .vim dest: ~/ mode: 0664 directory_mode: 0775 force: yes ``` ##### EXPECTED RESULTS The directory should be copied and permissions set as specified. ##### ACTUAL RESULTS I get an error related to symbolic permissions. ``` TASK [user : Dotfiles | Install ViM customizations] **************************** fatal: [default]: FAILED! => {""changed"": false, ""checksum"": ""109d2e70b4a83619eec12768f976177e55168de1"", ""details"": ""bad symbolic permission for mode: 509"", ""failed"": true, ""gid"": 1000, ""group"": ""vagrant"", ""mode"": ""0775"", ""msg"": ""mode must be in octal or symbolic form"", ""owner"": ""vagrant"", ""path"": ""/home/vagrant/.vim"", ""size"": 4096, ""state"": ""directory"", ""uid"": 1000} ``` ",True,"Permissions issue when copying directory - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `copy` ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began. ##### OS / ENVIRONMENT I'm unning Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment. ##### SUMMARY I'm trying to copy a directory (and its files) from `/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well. ##### STEPS TO REPRODUCE As far as I can tell, just run the task below under Ansible 2.1.2. ``` - name: Dotfiles | Install ViM customizations become: yes become_user: ""{{ username }}"" copy: src: .vim dest: ~/ mode: 0664 directory_mode: 0775 force: yes ``` ##### EXPECTED RESULTS The directory should be copied and permissions set as specified. ##### ACTUAL RESULTS I get an error related to symbolic permissions. ``` TASK [user : Dotfiles | Install ViM customizations] **************************** fatal: [default]: FAILED! 
=> {""changed"": false, ""checksum"": ""109d2e70b4a83619eec12768f976177e55168de1"", ""details"": ""bad symbolic permission for mode: 509"", ""failed"": true, ""gid"": 1000, ""group"": ""vagrant"", ""mode"": ""0775"", ""msg"": ""mode must be in octal or symbolic form"", ""owner"": ""vagrant"", ""path"": ""/home/vagrant/.vim"", ""size"": 4096, ""state"": ""directory"", ""uid"": 1000} ``` ",1,permissions issue when copying directory issue type bug report component name copy ansible version ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables absolutely nothing has changed in my config but i did upgrade ansible from right before the failure began os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific i m unning ansible from macos sierra a recent upgrade to bring up an ubuntu server in a vagrant virtualbox environment summary i m trying to copy a directory and its files from files to the file system and set appropriate permissions ansible seems to think i m using symbolic permissions when copying a directory i was running the playbook just fine but a user was reporting this error and that user was running so i upgraded after the upgrade i was got the issue as well steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used as far as i can tell just run the task below under ansible name dotfiles install vim customizations become yes become user username copy src vim dest mode directory mode force yes expected results the directory should be copied and permissions set as specified actual results i get an error related to symbolic permissions task fatal failed changed false checksum details bad symbolic permission for mode failed true gid group vagrant mode msg mode must be in octal or symbolic form owner vagrant path home vagrant vim size state directory uid ,1 1104,4981606644.0,IssuesEvent,2016-12-07 08:34:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Implement sysctl reload functionality,affects_2.3 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME sysctl ##### ANSIBLE VERSION N/A ##### SUMMARY As part of transforming all command/shell actions into proper modules, we have a need to reload sysctl (`command: sysctl -p`). The main use-case is to use this as a notification handler when we template/assemble the sysctl.conf file. ",True,"Implement sysctl reload functionality - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME sysctl ##### ANSIBLE VERSION N/A ##### SUMMARY As part of transforming all command/shell actions into proper modules, we have a need to reload sysctl (`command: sysctl -p`). The main use-case is to use this as a notification handler when we template/assemble the sysctl.conf file. 
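Editor's note on the sysctl reload feature request in this record: until a dedicated module flag exists for reload-only runs, the handler pattern it is meant to replace can be sketched with the command module, and single keys can already be applied live through the sysctl module's reload option. Illustrative fragment; the file paths are assumptions:
```
handlers:
  - name: reload sysctl
    command: sysctl -p               # the step the feature request wants the module to absorb

tasks:
  - name: assemble sysctl.conf from fragments
    assemble:
      src: /etc/sysctl.d/fragments   # assumed fragment directory
      dest: /etc/sysctl.conf
    notify: reload sysctl

  - name: set a single key and let the module reload it
    sysctl:
      name: net.ipv4.ip_forward
      value: 1
      sysctl_set: yes
      reload: yes
```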
",1,implement sysctl reload functionality issue type feature idea component name sysctl ansible version n a summary as part of transforming all command shell actions into proper modules we have a need to reload sysctl command sysctl p the main use case is to use this as a notification handler when we template assemble the sysctl conf file ,1 1827,6577345978.0,IssuesEvent,2017-09-12 00:16:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_asg ignores replace_instances if lc_check is true,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2_asg ##### ANSIBLE VERSION ``` ansible 2.0.0.1 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Running `ec2_asg` with `replace_instances` set to a single instance and `lc_check` set to `yes` against an ASG with multiple instances causes it to ignore `replace_instances` and replace a random instance in the ASG. ##### STEPS TO REPRODUCE - Spin up an ASG with min, max and desired > 1 - Change the launch configuration for the ASG - Run `ec2_asg`, specifying a single instance for `replace_instances`, and `lc_check` = `yes` It will choose a random instance from the instances in the ASG which have the old LC. This seems to stem from [these lines](https://github.com/ansible/ansible-modules-core/blob/7314cc3867eb90bc1c098e29265ae48670ad35b1/cloud/amazon/ec2_asg.py#L628-L633) ignoring the passed-in `initial_instances` and instead producing its own list of instances to be terminated. ``` ec2_asg: lc_check: yes replace_batch_size: 1 replace_instances: my_instance_id name: my_asg min_size: 3 max_size: 3 desired_capacity: 3 launch_config_name: my_lc region: us-west-2 ``` ##### EXPECTED RESULTS It would spin up a new instance in the ASG, and then terminate the instance I specified above ##### ACTUAL RESULTS It sometimes terminates the one I specify, other times it terminates a different one. 
``` TASK [Cycling | ec2_asg | Cycle instance (only if its launch configuration differs from that of the ASG)] *** task path: cycle-asg-instance-with-status-check.yml:16 Wednesday 15 June 2016 15:31:18 +0000 (0:00:00.020) 0:04:47.913 ******** ESTABLISH LOCAL CONNECTION FOR USER: admin 127.0.0.1 EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )"" ) 127.0.0.1 PUT /tmp/tmpTZaAaD TO /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg 127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg; rm -rf ""/home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/"" > /dev/null 2>&1 changed: [localhost] => {""availability_zones"": [""us-west-2c""], ""changed"": true, ""default_cooldown"": 300, ""desired_capacity"": 2, ""health_check_period"": 300, ""health_check_type"": ""EC2"", ""healthy_instances"": 3, ""in_service_instances"": 3, ""instance_facts"": {""i-01fca335e1e29c65c"": {""health_status"": ""Healthy"", ""launch_config_name"": ""terraform-5nlhqhrvt5e3taugrqzifth2su"", ""lifecycle_state"": ""InService""}, ""i-03ea5a0be5b5b92a5"" : {""health_status"": ""Healthy"", ""launch_config_name"": """", ""lifecycle_state"": ""InService""}, ""i-0ad0d81d719fe7bc1"": {""health_status"": ""Healthy"", ""launch_config_name"": null, ""lifecycle_state"": ""InService""}}, ""instances"": [""i-01fca335e1e29c65c"", ""i-03ea5a0be5b5b92a5"", ""i-0ad0d81d719fe7bc1""], ""invocation"": {""module_args"": {""availability_zones"": null, ""aws_access_key"": null, ""aws_secret_key "": null, ""default_cooldown"": 300, ""desired_capacity"": 2, ""ec2_url"": null, ""health_check_period"": 300, ""health_check_type"": ""EC2"", ""launch_config_name"": """", ""lc_check"": true, ""load_balancers"": null, ""max_size"": 2, ""min_size"": 2, ""name"": ""router-jljw-us-west-2c"", ""profile"": null, ""region"": ""us-west-2"", ""replace_all_instances"": false, ""replace_batch_size"": 1, ""replace_instances"": [""i-00 65d89d324fe72df""], ""security_token"": null, ""state"": ""present"", ""tags"": [], ""termination_policies"": [""Default""], ""validate_certs"": true, ""vpc_zone_identifier"": null, ""wait_for_instances"": true, ""wait_timeout"": 300}, ""module_name"": ""ec2_asg""}, ""launch_config_name"": """", ""load_balancers"":, ""max_size"": 2, ""min_size"": 2, ""name"": ""my_asg"", ""pending_instanc es"": 0, ""placement_group"": null, ""tags"": {""cleaner-destroy-after"": ""2016-06-14 16:49:17 +0000""}, ""terminating_instances"": 0, ""termination_policies"": [""Default""], ""unhealthy_instances"": 0, ""viable_instances"": 3, ""vpc_zone_identifier"": ""subnet-f453acac""} ``` ",True,"ec2_asg ignores replace_instances if lc_check is true - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2_asg ##### ANSIBLE VERSION ``` ansible 2.0.0.1 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Running `ec2_asg` with `replace_instances` set to a single instance and `lc_check` set to `yes` against an ASG with multiple instances causes it to ignore `replace_instances` and replace a random instance in the ASG. 
##### STEPS TO REPRODUCE - Spin up an ASG with min, max and desired > 1 - Change the launch configuration for the ASG - Run `ec2_asg`, specifying a single instance for `replace_instances`, and `lc_check` = `yes` It will choose a random instance from the instances in the ASG which have the old LC. This seems to stem from [these lines](https://github.com/ansible/ansible-modules-core/blob/7314cc3867eb90bc1c098e29265ae48670ad35b1/cloud/amazon/ec2_asg.py#L628-L633) ignoring the passed-in `initial_instances` and instead producing its own list of instances to be terminated. ``` ec2_asg: lc_check: yes replace_batch_size: 1 replace_instances: my_instance_id name: my_asg min_size: 3 max_size: 3 desired_capacity: 3 launch_config_name: my_lc region: us-west-2 ``` ##### EXPECTED RESULTS It would spin up a new instance in the ASG, and then terminate the instance I specified above ##### ACTUAL RESULTS It sometimes terminates the one I specify, other times it terminates a different one. ``` TASK [Cycling | ec2_asg | Cycle instance (only if its launch configuration differs from that of the ASG)] *** task path: cycle-asg-instance-with-status-check.yml:16 Wednesday 15 June 2016 15:31:18 +0000 (0:00:00.020) 0:04:47.913 ******** ESTABLISH LOCAL CONNECTION FOR USER: admin 127.0.0.1 EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )"" ) 127.0.0.1 PUT /tmp/tmpTZaAaD TO /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg 127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg; rm -rf ""/home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/"" > /dev/null 2>&1 changed: [localhost] => {""availability_zones"": [""us-west-2c""], ""changed"": true, ""default_cooldown"": 300, ""desired_capacity"": 2, ""health_check_period"": 300, ""health_check_type"": ""EC2"", ""healthy_instances"": 3, ""in_service_instances"": 3, ""instance_facts"": {""i-01fca335e1e29c65c"": {""health_status"": ""Healthy"", ""launch_config_name"": ""terraform-5nlhqhrvt5e3taugrqzifth2su"", ""lifecycle_state"": ""InService""}, ""i-03ea5a0be5b5b92a5"" : {""health_status"": ""Healthy"", ""launch_config_name"": """", ""lifecycle_state"": ""InService""}, ""i-0ad0d81d719fe7bc1"": {""health_status"": ""Healthy"", ""launch_config_name"": null, ""lifecycle_state"": ""InService""}}, ""instances"": [""i-01fca335e1e29c65c"", ""i-03ea5a0be5b5b92a5"", ""i-0ad0d81d719fe7bc1""], ""invocation"": {""module_args"": {""availability_zones"": null, ""aws_access_key"": null, ""aws_secret_key "": null, ""default_cooldown"": 300, ""desired_capacity"": 2, ""ec2_url"": null, ""health_check_period"": 300, ""health_check_type"": ""EC2"", ""launch_config_name"": """", ""lc_check"": true, ""load_balancers"": null, ""max_size"": 2, ""min_size"": 2, ""name"": ""router-jljw-us-west-2c"", ""profile"": null, ""region"": ""us-west-2"", ""replace_all_instances"": false, ""replace_batch_size"": 1, ""replace_instances"": [""i-00 65d89d324fe72df""], ""security_token"": null, ""state"": ""present"", ""tags"": [], ""termination_policies"": [""Default""], ""validate_certs"": true, ""vpc_zone_identifier"": null, ""wait_for_instances"": true, ""wait_timeout"": 300}, ""module_name"": ""ec2_asg""}, ""launch_config_name"": """", ""load_balancers"":, ""max_size"": 2, ""min_size"": 2, ""name"": ""my_asg"", ""pending_instanc es"": 0, 
""placement_group"": null, ""tags"": {""cleaner-destroy-after"": ""2016-06-14 16:49:17 +0000""}, ""terminating_instances"": 0, ""termination_policies"": [""Default""], ""unhealthy_instances"": 0, ""viable_instances"": 3, ""vpc_zone_identifier"": ""subnet-f453acac""} ``` ",1, asg ignores replace instances if lc check is true issue type bug report component name cloud amazon asg ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary running asg with replace instances set to a single instance and lc check set to yes against an asg with multiple instances causes it to ignore replace instances and replace a random instance in the asg steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used spin up an asg with min max and desired change the launch configuration for the asg run asg specifying a single instance for replace instances and lc check yes it will choose a random instance from the instances in the asg which have the old lc this seems to stem from ignoring the passed in initial instances and instead producing its own list of instances to be terminated asg lc check yes replace batch size replace instances my instance id name my asg min size max size desired capacity launch config name my lc region us west expected results it would spin up a new instance in the asg and then terminate the instance i specified above actual results it sometimes terminates the one i specify other times it terminates a different one task task path cycle asg instance with status check yml wednesday june establish local connection for user admin exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp tmptzaaad to home admin ansible tmp ansible tmp asg exec lang en us utf lc all en us utf lc messages en us utf usr bin python home admin ansible tmp ansible tmp asg rm rf home admin ansible tmp ansible tmp dev null changed availability zones changed true default cooldown desired capacity health check period health check type healthy instances in service instances instance facts i health status healthy launch config name terraform lifecycle state inservice i health status healthy launch config name lifecycle state inservice i health status healthy launch config name null lifecycle state inservice instances invocation module args availability zones null aws access key null aws secret key null default cooldown desired capacity url null health check period health check type launch config name lc check true load balancers null max size min size name router jljw us west profile null region us west replace all instances false replace batch size replace instances i security token null state present tags termination policies validate certs true vpc zone identifier null wait for instances true wait timeout module name asg launch config name load balancers max size min size name my asg pending instanc es placement group null tags cleaner destroy after terminating instances termination policies unhealthy instances viable instances vpc zone identifier subnet ,1 1659,6574047879.0,IssuesEvent,2017-09-11 11:14:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure resource 
group creation fails,affects_2.2 azure bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_resourcegroup ##### ANSIBLE VERSION ansible 2.2.0.0 also tried with github pip install at this time ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ``` pip freeze | grep azure azure==2.0.0rc5 azure-batch==0.30.0rc5 azure-cli==0.1.0b10 azure-cli-acr==0.1.0b10 azure-cli-acs==0.1.0b10 azure-cli-appservice==0.1.0b10 azure-cli-cloud==0.1.0b10 azure-cli-component==0.1.0b10 azure-cli-configure==0.1.0b10 azure-cli-container==0.1.0b10 azure-cli-context==0.1.0b10 azure-cli-core==0.1.0b10 azure-cli-feedback==0.1.0b10 azure-cli-network==0.1.0b10 azure-cli-profile==0.1.0b10 azure-cli-resource==0.1.0b10 azure-cli-role==0.1.0b10 azure-cli-storage==0.1.0b10 azure-cli-vm==0.1.0b10 azure-common==1.1.4 azure-graphrbac==0.30.0rc6 azure-mgmt==0.30.0rc5 azure-mgmt-authorization==0.30.0rc6 azure-mgmt-batch==0.30.0rc5 azure-mgmt-cdn==0.30.0rc5 azure-mgmt-cognitiveservices==0.30.0rc5 azure-mgmt-commerce==0.30.0rc5 azure-mgmt-compute==0.32.1 azure-mgmt-containerregistry==0.1.0 azure-mgmt-dns==0.30.0rc6 azure-mgmt-keyvault==0.30.0rc5 azure-mgmt-logic==0.30.0rc5 azure-mgmt-network==0.30.0rc6 azure-mgmt-notificationhubs==0.30.0rc5 azure-mgmt-nspkg==1.0.0 azure-mgmt-powerbiembedded==0.30.0rc5 azure-mgmt-redis==0.30.0rc5 azure-mgmt-resource==0.30.2 azure-mgmt-scheduler==0.30.0rc5 azure-mgmt-storage==0.30.0rc6 azure-mgmt-trafficmanager==0.30.0rc6 azure-mgmt-web==0.30.1 azure-nspkg==1.0.0 azure-servicebus==0.20.2 azure-servicemanagement-legacy==0.20.3 azure-storage==0.33.0 msrestazure==0.4.5 ``` ##### SUMMARY Resource Group Creation fails due to 403 on existence check url ##### STEPS TO REPRODUCE ``` - name: Create a resource group azure_rm_resourcegroup: name: devtesting location: westus tags: testing: testing delete: never ``` ##### EXPECTED RESULTS Expected resource group to be created ##### ACTUAL RESULTS ``` fatal: [development-tools]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ad_user"": null, ""append_tags"": true, ""client_id"": null, ""force"": false, ""location"": ""westus"", ""name"": ""devtesting"", ""password"": null, ""profile"": null, ""secret"": null, ""state"": ""present"", ""subscription_id"": null, ""tags"": { ""delete"": ""never"", ""testing"": ""testing"" }, ""tenant"": null }, ""module_name"": ""azure_rm_resourcegroup"" }, ""msg"": ""Error checking for existence of name devtesting - 403 Client Error: Forbidden for url: https://management.azure.com/subscriptions/MYSUBID/resourcegroups/devtesting?api-version=2016-09-01"" } ``` ``` ",True,"azure resource group creation fails - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_resourcegroup ##### ANSIBLE VERSION ansible 2.2.0.0 also tried with github pip install at this time ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ``` pip freeze | grep azure azure==2.0.0rc5 azure-batch==0.30.0rc5 azure-cli==0.1.0b10 azure-cli-acr==0.1.0b10 azure-cli-acs==0.1.0b10 azure-cli-appservice==0.1.0b10 azure-cli-cloud==0.1.0b10 azure-cli-component==0.1.0b10 azure-cli-configure==0.1.0b10 azure-cli-container==0.1.0b10 azure-cli-context==0.1.0b10 azure-cli-core==0.1.0b10 azure-cli-feedback==0.1.0b10 azure-cli-network==0.1.0b10 azure-cli-profile==0.1.0b10 azure-cli-resource==0.1.0b10 azure-cli-role==0.1.0b10 azure-cli-storage==0.1.0b10 azure-cli-vm==0.1.0b10 azure-common==1.1.4 azure-graphrbac==0.30.0rc6 azure-mgmt==0.30.0rc5 azure-mgmt-authorization==0.30.0rc6 azure-mgmt-batch==0.30.0rc5 azure-mgmt-cdn==0.30.0rc5 azure-mgmt-cognitiveservices==0.30.0rc5 azure-mgmt-commerce==0.30.0rc5 azure-mgmt-compute==0.32.1 azure-mgmt-containerregistry==0.1.0 azure-mgmt-dns==0.30.0rc6 azure-mgmt-keyvault==0.30.0rc5 azure-mgmt-logic==0.30.0rc5 azure-mgmt-network==0.30.0rc6 azure-mgmt-notificationhubs==0.30.0rc5 azure-mgmt-nspkg==1.0.0 azure-mgmt-powerbiembedded==0.30.0rc5 azure-mgmt-redis==0.30.0rc5 azure-mgmt-resource==0.30.2 azure-mgmt-scheduler==0.30.0rc5 azure-mgmt-storage==0.30.0rc6 azure-mgmt-trafficmanager==0.30.0rc6 azure-mgmt-web==0.30.1 azure-nspkg==1.0.0 azure-servicebus==0.20.2 azure-servicemanagement-legacy==0.20.3 azure-storage==0.33.0 msrestazure==0.4.5 ``` ##### SUMMARY Resource Group Creation fails due to 403 on existence check url ##### STEPS TO REPRODUCE ``` - name: Create a resource group azure_rm_resourcegroup: name: devtesting location: westus tags: testing: testing delete: never ``` ##### EXPECTED RESULTS Expected resource group to be created ##### ACTUAL RESULTS ``` fatal: [development-tools]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""ad_user"": null, ""append_tags"": true, ""client_id"": null, ""force"": false, ""location"": ""westus"", ""name"": ""devtesting"", ""password"": null, ""profile"": null, ""secret"": null, ""state"": ""present"", ""subscription_id"": null, ""tags"": { ""delete"": ""never"", ""testing"": ""testing"" }, ""tenant"": null }, ""module_name"": ""azure_rm_resourcegroup"" }, ""msg"": ""Error checking for existence of name devtesting - 403 Client Error: Forbidden for url: https://management.azure.com/subscriptions/MYSUBID/resourcegroups/devtesting?api-version=2016-09-01"" } ``` ``` ",1,azure resource group creation fails issue type bug report component name azure rm resourcegroup ansible version ansible also tried with github pip install at this time configuration os environment linux pip freeze grep azure azure azure batch azure cli azure cli acr azure cli acs azure cli appservice azure cli cloud azure cli component azure cli configure azure cli container azure cli context azure cli core azure cli feedback azure cli network azure cli profile azure cli resource azure cli role azure cli storage azure cli vm azure common azure graphrbac azure mgmt azure mgmt authorization azure mgmt batch azure mgmt cdn azure mgmt cognitiveservices azure mgmt commerce azure mgmt compute azure mgmt containerregistry azure mgmt dns azure mgmt keyvault azure mgmt logic azure mgmt network azure mgmt notificationhubs azure mgmt nspkg azure mgmt powerbiembedded azure mgmt redis azure mgmt resource azure mgmt scheduler azure mgmt storage azure mgmt trafficmanager azure mgmt web azure nspkg azure servicebus azure servicemanagement legacy azure storage msrestazure summary resource group creation fails due to on existence check url steps to reproduce name create a resource group azure rm resourcegroup name devtesting location westus tags testing testing delete never expected results expected resource group to be created actual results fatal failed changed false failed true invocation module args ad user null append tags true client id null force false location westus name devtesting password null profile null secret null state present subscription id null tags delete never testing testing tenant null module name azure rm resourcegroup msg error checking for existence of name devtesting client error forbidden for url ,1 1791,6575887132.0,IssuesEvent,2017-09-11 17:42:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt force with deb doesn't force,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/jenkins/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY Trying to force install a deb file doesn't (always) work. ##### STEPS TO REPRODUCE Custom built nginx deb and install it. Then change some of the included modules, and create another custom deb for that nginx. Now try install it via ansible and force. ``` - name: install nginx pagespeed apt: > deb={{ nginx_pagespeed_deb_url }} dpkg_options='force-confold,force-confdef' force=yes ``` I suspect that because the deb has the same header (and version) as the existing deb ansible is refusing to say changed. The contents of the deb has changed with regards to what modules were compiled in, but the version is the same. 
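Editor's note on the apt deb reinstall report in this record: because the rebuilt package keeps the same version string, apt sees nothing to change, so until the module grows a reinstall path the practical fallback is calling dpkg directly. A hedged sketch; the package path is illustrative and the changed_when test is only a heuristic:
```
- name: force reinstall of the locally built nginx package (sketch)
  command: dpkg -i --force-confold --force-confdef /tmp/nginx_1.11.1-1_amd64.deb
  register: nginx_reinstall
  changed_when: "'Unpacking' in nginx_reinstall.stdout"   # heuristic, adjust to dpkg output
```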
##### EXPECTED RESULTS Expected deb to be installed as if I did `dpkg -i file.deb` And to see ansible emit the changed message. Changed. ##### ACTUAL RESULTS ``` ok: [host2.example.org] => {""changed"": false, ""diff"": """", ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": ""/tmp/ansible_mj0640/nginx_1.11.1-1_amd64.deb"", ""default_release"": null, ""dpkg_options"": ""force-confold,force-confdef"", ""force"": true, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null}, ""module_name"": ""apt""}, ""stderr"": """", ""stdout"": """", ""stdout_lines"": []} ``` ",True,"apt force with deb doesn't force - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/jenkins/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY Trying to force install a deb file doesn't (always) work. ##### STEPS TO REPRODUCE Custom built nginx deb and install it. Then change some of the included modules, and create another custom deb for that nginx. Now try install it via ansible and force. ``` - name: install nginx pagespeed apt: > deb={{ nginx_pagespeed_deb_url }} dpkg_options='force-confold,force-confdef' force=yes ``` I suspect that because the deb has the same header (and version) as the existing deb ansible is refusing to say changed. The contents of the deb has changed with regards to what modules were compiled in, but the version is the same. ##### EXPECTED RESULTS Expected deb to be installed as if I did `dpkg -i file.deb` And to see ansible emit the changed message. Changed. 
##### ACTUAL RESULTS ``` ok: [host2.example.org] => {""changed"": false, ""diff"": """", ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": ""/tmp/ansible_mj0640/nginx_1.11.1-1_amd64.deb"", ""default_release"": null, ""dpkg_options"": ""force-confold,force-confdef"", ""force"": true, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null}, ""module_name"": ""apt""}, ""stderr"": """", ""stdout"": """", ""stdout_lines"": []} ``` ",1,apt force with deb doesn t force issue type bug report component name apt ansible version ansible config file home jenkins ansible ansible cfg configured module search path default w o overrides configuration os environment ubuntu summary trying to force install a deb file doesn t always work steps to reproduce custom built nginx deb and install it then change some of the included modules and create another custom deb for that nginx now try install it via ansible and force name install nginx pagespeed apt deb nginx pagespeed deb url dpkg options force confold force confdef force yes i suspect that because the deb has the same header and version as the existing deb ansible is refusing to say changed the contents of the deb has changed with regards to what modules were compiled in but the version is the same expected results expected deb to be installed as if i did dpkg i file deb and to see ansible emit the changed message changed actual results ok changed false diff invocation module args allow unauthenticated false autoremove false cache valid time null deb tmp ansible nginx deb default release null dpkg options force confold force confdef force true install recommends null only upgrade false package null purge false state present update cache false upgrade null module name apt stderr stdout stdout lines ,1 1726,6574652744.0,IssuesEvent,2017-09-11 13:39:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,VLAN idempotency breaks with special case in eos_config,affects_2.3 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /home/vagrant/iostest/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Using default config ##### OS / ENVIRONMENT Ubuntu managing Arista ##### SUMMARY In this special case when multiple VLANS are created without a name (or the same name) associated to them, the ""show run"" of an Arista config puts them in one line such as ""vlan 10-11"". This breaks the idempotency since it will attempt to add the VLAN again. It will attempt to add VLAN 10 or 11 again, for example. It appears as though during the running config check, the module is parsing for an individual VLAN line with that particualr VLAN. ##### STEPS TO REPRODUCE 1) Create a task that adds a NEW vlan to an arista switch. Do not add a name to this vlan at this time. 
2) Run the task again but change make vlan 11 instead of vlan 10 3) Run the exact task in step 1...adding vlan 10 to the switch again ``` #eos.yaml - name: role add vlan using eos_config module connection: local eos_config: lines: - vlan 10 provider: ""{{ cli }}"" register: vlan_created_out #arista-vlan.yaml --- - name: playbook - vlan add using eos_config hosts: eos gather_facts: no connection: local vars_files: - creds.yaml roles: - { role: vlan_add } ``` ##### EXPECTED RESULTS Step 1) Expected output, changed=1 and new vlan 10 created Step 2) Expected output, changed=1 and new vlan 11 created Step 3) Expected output, changed=0 and vlan 10 not attempted to be created ##### ACTUAL RESULTS Step 1) Expected output, changed=1 and new vlan 10 created Step 2) Expected output, changed=1 and new vlan 11 created Step 3) Expected output, changed=1 and vlan 10 attempted to be created again...you can see when logging AAA commands that the vlan is attempting to be created every time. ``` ``` ",True,"VLAN idempotency breaks with special case in eos_config - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /home/vagrant/iostest/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Using default config ##### OS / ENVIRONMENT Ubuntu managing Arista ##### SUMMARY In this special case when multiple VLANS are created without a name (or the same name) associated to them, the ""show run"" of an Arista config puts them in one line such as ""vlan 10-11"". This breaks the idempotency since it will attempt to add the VLAN again. It will attempt to add VLAN 10 or 11 again, for example. It appears as though during the running config check, the module is parsing for an individual VLAN line with that particualr VLAN. ##### STEPS TO REPRODUCE 1) Create a task that adds a NEW vlan to an arista switch. Do not add a name to this vlan at this time. 2) Run the task again but change make vlan 11 instead of vlan 10 3) Run the exact task in step 1...adding vlan 10 to the switch again ``` #eos.yaml - name: role add vlan using eos_config module connection: local eos_config: lines: - vlan 10 provider: ""{{ cli }}"" register: vlan_created_out #arista-vlan.yaml --- - name: playbook - vlan add using eos_config hosts: eos gather_facts: no connection: local vars_files: - creds.yaml roles: - { role: vlan_add } ``` ##### EXPECTED RESULTS Step 1) Expected output, changed=1 and new vlan 10 created Step 2) Expected output, changed=1 and new vlan 11 created Step 3) Expected output, changed=0 and vlan 10 not attempted to be created ##### ACTUAL RESULTS Step 1) Expected output, changed=1 and new vlan 10 created Step 2) Expected output, changed=1 and new vlan 11 created Step 3) Expected output, changed=1 and vlan 10 attempted to be created again...you can see when logging AAA commands that the vlan is attempting to be created every time. 
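Because "show run" collapses unnamed VLANs into a single "vlan 10-11" line, the line match in eos_config never finds "vlan 10" again. A possible workaround sketch, assuming the standard parents/lines options of eos_config, is to give each VLAN an explicit name so it renders as its own block that the idempotency check can match:

```yaml
# Sketch: naming the VLAN keeps it on its own block in "show run",
# giving eos_config an exact line to match on subsequent runs.
- name: add vlan 10 with an explicit name
  eos_config:
    parents: vlan 10
    lines:
      - name vlan10
    provider: "{{ cli }}"
```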
``` ``` ",1,vlan idempotency breaks with special case in eos config issue type bug report component name eos config ansible version ansible config file home vagrant iostest ansible cfg configured module search path default w o overrides configuration using default config os environment ubuntu managing arista summary in this special case when multiple vlans are created without a name or the same name associated to them the show run of an arista config puts them in one line such as vlan this breaks the idempotency since it will attempt to add the vlan again it will attempt to add vlan or again for example it appears as though during the running config check the module is parsing for an individual vlan line with that particualr vlan steps to reproduce create a task that adds a new vlan to an arista switch do not add a name to this vlan at this time run the task again but change make vlan instead of vlan run the exact task in step adding vlan to the switch again eos yaml name role add vlan using eos config module connection local eos config lines vlan provider cli register vlan created out arista vlan yaml name playbook vlan add using eos config hosts eos gather facts no connection local vars files creds yaml roles role vlan add expected results step expected output changed and new vlan created step expected output changed and new vlan created step expected output changed and vlan not attempted to be created actual results step expected output changed and new vlan created step expected output changed and new vlan created step expected output changed and vlan attempted to be created again you can see when logging aaa commands that the vlan is attempting to be created every time ,1 1208,5162389391.0,IssuesEvent,2017-01-17 00:24:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,iam_cert doesn't return the ARN value after creating a certificate,affects_2.0 aws bug_report cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: Ubuntu 14.04.4 LTS ##### Summary: I'm using the `iam_cert` module to create a certificate to use with ELB. I've registered a variable with the results of running that module and dumped it. There's no ARN field that I need to feed to the `ec2_elb_lb` module. ##### Steps To Reproduce: ``` - name: Upload SSL certificate iam_cert: name: ssl state: present cert: cert.pem key: cert.key register: ssl_certificate - debug: var=ssl_certificate ``` ##### Expected Results: I expect the ARN field in the returned result. AWS CLI gives that information: ``` $ aws iam list-server-certificates { ""ServerCertificateMetadataList"": [ { ""ServerCertificateId"": ""AS...SU"", ""ServerCertificateName"": ""ssl"", ""Expiration"": ""2025-11-16T12:52:40Z"", ""Path"": ""/"", ""Arn"": ""arn:aws:iam::4...9:server-certificate/ssl"", ""UploadDate"": ""2016-03-03T21:46:09Z"" } ] } ``` So the info is there, but Ansible doesn't use it. ##### Actual Results: ``` ok: [localhost] => { ""ssl_certificate"": { ""cert_body"": ""-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"", ""cert_path"": ""/"", ""changed"": false, ""expiration_date"": ""2025-11-16T12:52:40Z"", ""msg"": ""No new path or name specified. 
No changes made"", ""name"": ""ssl"", ""upload_date"": ""2016-03-03T21:46:09Z"" } } ``` ",True,"iam_cert doesn't return the ARN value after creating a certificate - ##### Issue Type: - Bug Report ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: Ubuntu 14.04.4 LTS ##### Summary: I'm using the `iam_cert` module to create a certificate to use with ELB. I've registered a variable with the results of running that module and dumped it. There's no ARN field that I need to feed to the `ec2_elb_lb` module. ##### Steps To Reproduce: ``` - name: Upload SSL certificate iam_cert: name: ssl state: present cert: cert.pem key: cert.key register: ssl_certificate - debug: var=ssl_certificate ``` ##### Expected Results: I expect the ARN field in the returned result. AWS CLI gives that information: ``` $ aws iam list-server-certificates { ""ServerCertificateMetadataList"": [ { ""ServerCertificateId"": ""AS...SU"", ""ServerCertificateName"": ""ssl"", ""Expiration"": ""2025-11-16T12:52:40Z"", ""Path"": ""/"", ""Arn"": ""arn:aws:iam::4...9:server-certificate/ssl"", ""UploadDate"": ""2016-03-03T21:46:09Z"" } ] } ``` So the info is there, but Ansible doesn't use it. ##### Actual Results: ``` ok: [localhost] => { ""ssl_certificate"": { ""cert_body"": ""-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"", ""cert_path"": ""/"", ""changed"": false, ""expiration_date"": ""2025-11-16T12:52:40Z"", ""msg"": ""No new path or name specified. No changes made"", ""name"": ""ssl"", ""upload_date"": ""2016-03-03T21:46:09Z"" } } ``` ",1,iam cert doesn t return the arn value after creating a certificate issue type bug report ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides environment ubuntu lts summary i m using the iam cert module to create a certificate to use with elb i ve registered a variable with the results of running that module and dumped it there s no arn field that i need to feed to the elb lb module steps to reproduce name upload ssl certificate iam cert name ssl state present cert cert pem key cert key register ssl certificate debug var ssl certificate expected results i expect the arn field in the returned result aws cli gives that information aws iam list server certificates servercertificatemetadatalist servercertificateid as su servercertificatename ssl expiration path arn arn aws iam server certificate ssl uploaddate so the info is there but ansible doesn t use it actual results ok ssl certificate cert body begin certificate n n end certificate cert path changed false expiration date msg no new path or name specified no changes made name ssl upload date ,1 1693,6574204082.0,IssuesEvent,2017-09-11 11:57:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Broken pipe; Sharing connection closed.,affects_2.2 bug_report waiting_on_maintainer,"Sorry if there are similar issues but I think I have a specific case. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME command module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = ../my_path/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` host_key_checking = False ``` ##### OS / ENVIRONMENT Mac OS 10.11.6 OpenSSH_6.9p1, LibreSSL 2.1.8 ##### SUMMARY This happens sporadically. It appears to happen once and then on subsequent runs the error disappears. 
After waiting for some time running the play again will trigger the same error and re-running it the error no longer appears. ##### STEPS TO REPRODUCE Running a command task that runs for 4 hosts and the task is delegated to a single host (adding peers to a cluster master). ##### EXPECTED RESULTS Expected SSH connection and all well. ##### ACTUAL RESULTS Fails to connect to delegated host. ``` ""msg"": ""Failed to connect to the host via ssh: OpenSSH_6.9p1, LibreSSL 2.1.8\r\ndebug1: Reading configuration data /Users/jonnymcc/.ssh/config\r\ndebug1: /Users/jonnymcc/.ssh/config line 30: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 17898\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 8\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\n"", ""unreachable"": true ``` ",True,"Broken pipe; Sharing connection closed. - Sorry if there are similar issues but I think I have a specific case. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME command module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = ../my_path/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` host_key_checking = False ``` ##### OS / ENVIRONMENT Mac OS 10.11.6 OpenSSH_6.9p1, LibreSSL 2.1.8 ##### SUMMARY This happens sporadically. It appears to happen once and then on subsequent runs the error disappears. After waiting for some time running the play again will trigger the same error and re-running it the error no longer appears. ##### STEPS TO REPRODUCE Running a command task that runs for 4 hosts and the task is delegated to a single host (adding peers to a cluster master). ##### EXPECTED RESULTS Expected SSH connection and all well. ##### ACTUAL RESULTS Fails to connect to delegated host. 
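The debug output above points at the shared ssh control master going away between the delegated calls. One mitigation sketch, purely an assumption and not from the report, is to keep the multiplexed control socket alive longer via ansible_ssh_common_args (plain OpenSSH options):

```yaml
# group_vars/all.yml (illustrative): hold the ssh control master open longer
# so several tasks delegated to the same host do not race its shutdown.
ansible_ssh_common_args: "-o ControlMaster=auto -o ControlPersist=10m"
```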
``` ""msg"": ""Failed to connect to the host via ssh: OpenSSH_6.9p1, LibreSSL 2.1.8\r\ndebug1: Reading configuration data /Users/jonnymcc/.ssh/config\r\ndebug1: /Users/jonnymcc/.ssh/config line 30: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 17898\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 8\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\n"", ""unreachable"": true ``` ",1,broken pipe sharing connection closed sorry if there are similar issues but i think i have a specific case issue type bug report component name command module ansible version ansible config file my path ansible cfg configured module search path default w o overrides configuration host key checking false os environment mac os openssh libressl summary this happens sporadically it appears to happen once and then on subsequent runs the error disappears after waiting for some time running the play again will trigger the same error and re running it the error no longer appears steps to reproduce running a command task that runs for hosts and the task is delegated to a single host adding peers to a cluster master expected results expected ssh connection and all well actual results fails to connect to delegated host msg failed to connect to the host via ssh openssh libressl r reading configuration data users jonnymcc ssh config r users jonnymcc ssh config line applying options for r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r control master terminated unexpectedly r n unreachable true ,1 1763,6575013535.0,IssuesEvent,2017-09-11 14:46:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Cannot pass args to docker container,affects_2.1 cloud docker feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Sometimes there is a need to pass additional arguments when invoking `docker run`. Currently, there is no such option, so it led us to running docker containers from shell. 
##### STEPS TO REPRODUCE ``` - docker_container: name: consul image: progrium/consul args: -server ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"Cannot pass args to docker container - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Sometimes there is a need to pass additional arguments when invoking `docker run`. Currently, there is no such option, so it led us to running docker containers from shell. ##### STEPS TO REPRODUCE ``` - docker_container: name: consul image: progrium/consul args: -server ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,cannot pass args to docker container issue type feature idea component name docker container ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary sometimes there is a need to pass additional arguments when invoking docker run currently there is no such option so it led us to running docker containers from shell steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used docker container name consul image progrium consul args server expected results actual results ,1 1747,6574941788.0,IssuesEvent,2017-09-11 14:33:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest: index out of range exception while reconfiguring disk size,affects_2.1 bug_report cloud vmware waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... --- # tasks file for vm_create - name: check for dependency python-pip yum: name=""{{item}}"" state=latest with_items: - python-pip - name: check for dependencies pip: name=""{{item}}"" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: ""{{vcenter_hostname}}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" from_template: yes template_src: ""{{ vm_template }}"" cluster: ""{{ cluster }}"" resource_pool: ""{{ resource_pool }}"" power_on_after_clone: ""no"" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" state: reconfigured vm_extra_config: notes: ""created with ansible vsphere"" vm_disk: disk1: size_gb: ""{{ disk_main }}"" type: thin datastore: ""{{ datastore }}"" disk2: size_gb: ""{{ disk_var }}"" type: thin datastore: ""{{ datastore }}"" disk3: size_gb: ""{{ disk_opt }}"" type: thin datastore: ""{{ datastore }}"" disk4: size_gb: ""{{ disk_home }}"" type: thin datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""vmxnet3"" network: ""VM Network"" network_type: ""standard"" vm_hardware: memory_mb: ""{{ memory }}"" num_cpus: ""{{ cpucount }}"" osid: ""{{ osid }}"" scsi: paravirtual esxi: datacenter: ""{{ datacenter }}"" hostname: ""{{ esxi_host }}"" ... 
``` ##### EXPECTED RESULTS normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS creating vm from template works fine, but reconfiguring fails with exception ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1879, in main() File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1806, in main force=force File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 842, in reconfigure_vm module, vm_disk, changes) File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ``` ",True,"vsphere_guest: index out of range exception while reconfiguring disk size - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... --- # tasks file for vm_create - name: check for dependency python-pip yum: name=""{{item}}"" state=latest with_items: - python-pip - name: check for dependencies pip: name=""{{item}}"" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: ""{{vcenter_hostname}}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" from_template: yes template_src: ""{{ vm_template }}"" cluster: ""{{ cluster }}"" resource_pool: ""{{ resource_pool }}"" power_on_after_clone: ""no"" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" state: reconfigured vm_extra_config: notes: ""created with ansible vsphere"" vm_disk: disk1: size_gb: ""{{ disk_main }}"" type: thin datastore: ""{{ datastore }}"" disk2: size_gb: ""{{ disk_var }}"" type: thin datastore: ""{{ datastore }}"" disk3: size_gb: ""{{ disk_opt }}"" type: thin datastore: ""{{ datastore }}"" disk4: size_gb: ""{{ disk_home }}"" type: thin datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""vmxnet3"" network: ""VM Network"" network_type: ""standard"" vm_hardware: memory_mb: ""{{ memory }}"" num_cpus: ""{{ cpucount }}"" osid: ""{{ osid }}"" scsi: paravirtual esxi: datacenter: ""{{ datacenter }}"" hostname: ""{{ esxi_host }}"" ... ``` ##### EXPECTED RESULTS normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS creating vm from template works fine, but reconfiguring fails with exception ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1879, in main() File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1806, in main force=force File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 842, in reconfigure_vm module, vm_disk, changes) File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ``` ",1,vsphere guest index out of range exception while reconfiguring disk size issue type bug report component name ansible module vsphere guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment summary got index out of range exception while configuring vm with vsphere guest steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost gather facts false connection local roles vm create tasks file for vm create name check for dependency python pip yum name item state latest with items python pip name check for dependencies pip name item state latest with items pysphere pyvmomi name create vm from template vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test from template yes template src vm template cluster cluster resource pool resource pool power on after clone no tags create name reconfigure vm vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test state reconfigured vm extra config notes created with ansible vsphere vm disk size gb disk main type thin datastore datastore size gb disk var type thin datastore datastore size gb disk opt type thin datastore datastore size gb disk home type thin datastore datastore vm nic type network vm network network type standard vm hardware memory mb memory num cpus cpucount osid osid scsi paravirtual esxi datacenter datacenter hostname esxi host expected results normal playthrough with reconfigured disk sizes actual results creating vm from template works fine but reconfiguring fails with exception an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible bsledg ansible module vsphere guest py line in main file tmp ansible bsledg ansible module vsphere guest py line in main force force file tmp ansible bsledg ansible module vsphere guest py line in reconfigure vm module vm disk changes file tmp ansible bsledg ansible module vsphere guest py line in update disks hdd id vm devices split indexerror list index out of range ,1 1645,6572668802.0,IssuesEvent,2017-09-11 04:15:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"apt module: Pass ""--no-download"" to apt-get",affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu nodes ##### SUMMARY Would it be possible to allow the ""--no-download"" option to be passed to apt-get? ##### STEPS TO REPRODUCE For applications where the nodes do not have access to the Internet, i.e. 
large commercial, it would be useful to download the deb packages first, copy them to /var/apt/cache on each node and then run apt-get install --no-download. Creating an internal Ubuntu mirror would be prohibitive because of the file transfer required (several hundreds of GB). This can currently be accomplished using a shell command, but would be more elegant using the apt module. ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"apt module: Pass ""--no-download"" to apt-get - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu nodes ##### SUMMARY Would it be possible to allow the ""--no-download"" option to be passed to apt-get? ##### STEPS TO REPRODUCE For applications where the nodes do not have access to the Internet, i.e. large commercial, it would be useful to download the deb packages first, copy them to /var/apt/cache on each node and then run apt-get install --no-download. Creating an internal Ubuntu mirror would be prohibitive because of the file transfer required (several hundreds of GB). This can currently be accomplished using a shell command, but would be more elegant using the apt module. ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,apt module pass no download to apt get issue type feature idea component name apt ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment ubuntu nodes summary would it be possible to allow the no download option to be passed to apt get steps to reproduce for applications where the nodes do not have access to the internet i e large commercial it would be useful to download the deb packages first copy them to var apt cache on each node and then run apt get install no download creating an internal ubuntu mirror would be prohibitive because of the file transfer required several hundreds of gb this can currently be accomplished using a shell command but would be more elegant using the apt module expected results actual results ,1 747,4351158673.0,IssuesEvent,2016-07-31 17:59:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,fstab parameter of mount module won't work,bug_report docs_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = /usr/share/ansible ##### Ansible Configuration: ##### Environment: Ubuntu 14.04 64Bit ##### Summary: set fstab=**/tmp/fstab** in mount module result in 'can't find mount point in **/etc/fstab**' ##### Steps To Reproduce: ``` ./test-module -m ../lib/ansible/modules/core/system/mount.py -a ""src=/dev/sda9 name=/tmp/mnt fstype=ext3 state=mounted **fstab=/tmp/fstab**"" ``` ##### Expected Results: ``` changed: ok ``` and a line in /tmp/fstab and 'mount -a' should show the mounted device ##### Actual Results: ``` *********************************** PARSED OUTPUT { ""failed"": true, ""invocation"": { ""module_args"": { ""dump"": null, ""fstab"": **""/tmp/fstab""**, ""fstype"": ""ext3"", ""name"": ""/tmp/mnt"", ""opts"": null, ""passno"": null, ""src"": ""/dev/sda9"", ""state"": ""mounted"" } }, ""msg"": ""Error mounting /tmp/mnt: mount: /tmp/mnt konnte nicht in **/etc/fstab oder /etc/mtab** gefunden werden\n"" } ``` ",True,"fstab parameter of mount module won't work - ##### Issue Type: - 
Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = /usr/share/ansible ##### Ansible Configuration: ##### Environment: Ubuntu 14.04 64Bit ##### Summary: set fstab=**/tmp/fstab** in mount module result in 'can't find mount point in **/etc/fstab**' ##### Steps To Reproduce: ``` ./test-module -m ../lib/ansible/modules/core/system/mount.py -a ""src=/dev/sda9 name=/tmp/mnt fstype=ext3 state=mounted **fstab=/tmp/fstab**"" ``` ##### Expected Results: ``` changed: ok ``` and a line in /tmp/fstab and 'mount -a' should show the mounted device ##### Actual Results: ``` *********************************** PARSED OUTPUT { ""failed"": true, ""invocation"": { ""module_args"": { ""dump"": null, ""fstab"": **""/tmp/fstab""**, ""fstype"": ""ext3"", ""name"": ""/tmp/mnt"", ""opts"": null, ""passno"": null, ""src"": ""/dev/sda9"", ""state"": ""mounted"" } }, ""msg"": ""Error mounting /tmp/mnt: mount: /tmp/mnt konnte nicht in **/etc/fstab oder /etc/mtab** gefunden werden\n"" } ``` ",1,fstab parameter of mount module won t work issue type bug report ansible version ansible config file etc ansible ansible cfg configured module search path usr share ansible ansible configuration environment ubuntu summary set fstab tmp fstab in mount module result in can t find mount point in etc fstab steps to reproduce test module m lib ansible modules core system mount py a src dev name tmp mnt fstype state mounted fstab tmp fstab expected results changed ok and a line in tmp fstab and mount a should show the mounted device actual results parsed output failed true invocation module args dump null fstab tmp fstab fstype name tmp mnt opts null passno null src dev state mounted msg error mounting tmp mnt mount tmp mnt konnte nicht in etc fstab oder etc mtab gefunden werden n ,1 1143,5000406985.0,IssuesEvent,2016-12-10 09:26:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git always pulls from master instead from tracking branch,affects_1.9 bug_report waiting_on_maintainer,"Issue Type: Bug report Ansible Version: 1.9.2 Ansible Configuration: ? Environment: Gentoo Linux 64bit Summary: remote checkout tracks branch develop but git module always updates from master Steps To Reproduce: - Clone remote repo onto branch != master, with master/HEAD != branch/HEAD - .... -m git -a ""repo=git://..../foo.git dest=.../foo update=yes"" - check repo status Expected Results: pull from tracking branch Actual Results: On remote: ``` git branch -av * develop dc66049 Basic Celery support remotes/origin/HEAD -> origin/master remotes/origin/develop dc66049 Basic Celery support remotes/origin/master 049a7c5 Merge branch 'release/0.4.2' ``` After running `ansible foo-staging -m git -a ""repo=git://.../foo.git dest=../foo update=yes""` ``` # git branch -av * develop 049a7c5 [behind 4] Merge branch 'release/0.4.2' remotes/origin/HEAD -> origin/master remotes/origin/develop dc66049 Basic Celery support remotes/origin/master 049a7c5 Merge branch 'release/0.4.2' ``` ",True,"git always pulls from master instead from tracking branch - Issue Type: Bug report Ansible Version: 1.9.2 Ansible Configuration: ? Environment: Gentoo Linux 64bit Summary: remote checkout tracks branch develop but git module always updates from master Steps To Reproduce: - Clone remote repo onto branch != master, with master/HEAD != branch/HEAD - .... 
-m git -a ""repo=git://..../foo.git dest=.../foo update=yes"" - check repo status Expected Results: pull from tracking branch Actual Results: On remote: ``` git branch -av * develop dc66049 Basic Celery support remotes/origin/HEAD -> origin/master remotes/origin/develop dc66049 Basic Celery support remotes/origin/master 049a7c5 Merge branch 'release/0.4.2' ``` After running `ansible foo-staging -m git -a ""repo=git://.../foo.git dest=../foo update=yes""` ``` # git branch -av * develop 049a7c5 [behind 4] Merge branch 'release/0.4.2' remotes/origin/HEAD -> origin/master remotes/origin/develop dc66049 Basic Celery support remotes/origin/master 049a7c5 Merge branch 'release/0.4.2' ``` ",1,git always pulls from master instead from tracking branch issue type bug report ansible version ansible configuration environment gentoo linux summary remote checkout tracks branch develop but git module always updates from master steps to reproduce clone remote repo onto branch master with master head branch head m git a repo git foo git dest foo update yes check repo status expected results pull from tracking branch actual results on remote git branch av develop basic celery support remotes origin head origin master remotes origin develop basic celery support remotes origin master merge branch release after running ansible foo staging m git a repo git foo git dest foo update yes git branch av develop merge branch release remotes origin head origin master remotes origin develop basic celery support remotes origin master merge branch release ,1 1125,4995839253.0,IssuesEvent,2016-12-09 11:36:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"nxos_config, rollbacked on abnormal checkpoint created by Ansible",affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/users/kpersonnic/projects/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Hi, While writing a new role for nexus switches dedicated to radius configuration I executed a ""non working"" version of my role. As expected Ansible rollbacked the change when it encountered an error. But the ckeckpoint created by Ansible contained unexpected commands lines which unconfigured two ports-profils used by several network port of the switch. How Ansible can create a checkpoint with ""random"" content ? ##### STEPS TO REPRODUCE I don't have the command history anymore in my term and I don't have spare N7K to reproduce the bug. But I think It might append with any failed commands which result in a rollback from the checkpoint created at the beginning of the run. ``` ansible-playbook playbooks/config_common.yml -k ``` ##### EXPECTED RESULTS Expected content of the checkpoint : ``` [NEXUS7K]# show rollback log exec Operation : Rollback to Checkpoint Checkpoint name : ansible_1479404111 Rollback done By : kpersonnic Rollback mode : atomic Verbose : disabled Start Time : Thu, 17:35:12 17 Nov 2016 End Time : Thu, 17:35:32 17 Nov 2016 Rollback Status : Success ... 
Executing Patch: ---------------- `conf t` `no aaa group server radius RAD_10.10.50.9` ``` ##### ACTUAL RESULTS Actual content of the checkpoint executed by ansible ``` [NEXUS7K]# show rollback log exec Operation : Rollback to Checkpoint Checkpoint name : ansible_1479404111 Rollback done By : kpersonnic Rollback mode : atomic Verbose : disabled Start Time : Thu, 17:35:12 17 Nov 2016 End Time : Thu, 17:35:32 17 Nov 2016 Rollback Status : Success ... Executing Patch: ---------------- `conf t` `port-profile type ethernet FRONT_ESX_ETH1-ETH3` `no switchport mode` `exit` `port-profile type ethernet FRONT_ESX_ETH0-ETH2` `no switchport mode` `exit` `no aaa group server radius RAD_10.10.50.9` ``` ",True,"nxos_config, rollbacked on abnormal checkpoint created by Ansible - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/users/kpersonnic/projects/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Hi, While writing a new role for nexus switches dedicated to radius configuration I executed a ""non working"" version of my role. As expected Ansible rollbacked the change when it encountered an error. But the ckeckpoint created by Ansible contained unexpected commands lines which unconfigured two ports-profils used by several network port of the switch. How Ansible can create a checkpoint with ""random"" content ? ##### STEPS TO REPRODUCE I don't have the command history anymore in my term and I don't have spare N7K to reproduce the bug. But I think It might append with any failed commands which result in a rollback from the checkpoint created at the beginning of the run. ``` ansible-playbook playbooks/config_common.yml -k ``` ##### EXPECTED RESULTS Expected content of the checkpoint : ``` [NEXUS7K]# show rollback log exec Operation : Rollback to Checkpoint Checkpoint name : ansible_1479404111 Rollback done By : kpersonnic Rollback mode : atomic Verbose : disabled Start Time : Thu, 17:35:12 17 Nov 2016 End Time : Thu, 17:35:32 17 Nov 2016 Rollback Status : Success ... Executing Patch: ---------------- `conf t` `no aaa group server radius RAD_10.10.50.9` ``` ##### ACTUAL RESULTS Actual content of the checkpoint executed by ansible ``` [NEXUS7K]# show rollback log exec Operation : Rollback to Checkpoint Checkpoint name : ansible_1479404111 Rollback done By : kpersonnic Rollback mode : atomic Verbose : disabled Start Time : Thu, 17:35:12 17 Nov 2016 End Time : Thu, 17:35:32 17 Nov 2016 Rollback Status : Success ... 
Executing Patch: ---------------- `conf t` `port-profile type ethernet FRONT_ESX_ETH1-ETH3` `no switchport mode` `exit` `port-profile type ethernet FRONT_ESX_ETH0-ETH2` `no switchport mode` `exit` `no aaa group server radius RAD_10.10.50.9` ``` ",1,nxos config rollbacked on abnormal checkpoint created by ansible issue type bug report component name nxos config ansible version ansible config file home users kpersonnic projects ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment nxos software bios version kickstart version system version bios compile time kickstart image file is bootflash kickstart npe bin kickstart compile time system image file is bootflash npe bin system compile time hardware cisco slot chassis supervisor module intel r xeon r cpu with kb of memory summary hi while writing a new role for nexus switches dedicated to radius configuration i executed a non working version of my role as expected ansible rollbacked the change when it encountered an error but the ckeckpoint created by ansible contained unexpected commands lines which unconfigured two ports profils used by several network port of the switch how ansible can create a checkpoint with random content steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i don t have the command history anymore in my term and i don t have spare to reproduce the bug but i think it might append with any failed commands which result in a rollback from the checkpoint created at the beginning of the run ansible playbook playbooks config common yml k expected results expected content of the checkpoint show rollback log exec operation rollback to checkpoint checkpoint name ansible rollback done by kpersonnic rollback mode atomic verbose disabled start time thu nov end time thu nov rollback status success executing patch conf t no aaa group server radius rad actual results actual content of the checkpoint executed by ansible show rollback log exec operation rollback to checkpoint checkpoint name ansible rollback done by kpersonnic rollback mode atomic verbose disabled start time thu nov end time thu nov rollback status success executing patch conf t port profile type ethernet front esx no switchport mode exit port profile type ethernet front esx no switchport mode exit no aaa group server radius rad ,1 913,4581950419.0,IssuesEvent,2016-09-19 08:24:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,eos_config: `.updates` not defined when using `src:` - improve docs,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100) lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100) lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY in `eos_template` when using `src:` we return `.updates` (assuming there are any) In `eos_config` (the replacement) we only return `.updates` when the module has been called with `lines:`. 
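Until .updates is also populated for src:-driven runs, the diff.prepared value that the module already returns (visible in the actual results quoted further down) can stand in for it. A sketch using only options shown in the report:

```yaml
# Sketch: register the result of a src: push and read the prepared diff
# instead of .updates.
- name: configure device with config
  eos_config:
    src: basic/config.j2
    provider: "{{ cli }}"
  register: result

- name: show what changed
  debug:
    msg: "{{ result.diff.prepared }}"
  when: result.changed
```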
I was under the impression that `_config` should have feature parity with the older `_template` modules This *may* be defined as a feature request, rather than a bug. Also from looking at the code this may well apply to all `_config` modules. ##### STEPS TO REPRODUCE ``` - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" register: result - name: ""XOXO debug"" debug: msg: ""{{ result }}"" ``` ##### EXPECTED RESULTS `.updates` to be returned when there are changes ##### ACTUAL RESULTS ``` ok: [veos01] => { ""msg"": { ""changed"": true, ""diff"": { ""prepared"": ""--- system:/running-config\n+++ session:/ansible_1473770349-session-config\n@@ -35,6 +35,8 @@\n shutdown\n !\n interface Ethernet5\n+ description this is a test\n+ shutdown\n !\n interface Ethernet6\n shutdown\n"" }, ""warnings"": [] } } ``` ",True,"eos_config: `.updates` not defined when using `src:` - improve docs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100) lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100) lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY in `eos_template` when using `src:` we return `.updates` (assuming there are any) In `eos_config` (the replacement) we only return `.updates` when the module has been called with `lines:`. I was under the impression that `_config` should have feature parity with the older `_template` modules This *may* be defined as a feature request, rather than a bug. Also from looking at the code this may well apply to all `_config` modules. 
##### STEPS TO REPRODUCE ``` - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" register: result - name: ""XOXO debug"" debug: msg: ""{{ result }}"" ``` ##### EXPECTED RESULTS `.updates` to be returned when there are changes ##### ACTUAL RESULTS ``` ok: [veos01] => { ""msg"": { ""changed"": true, ""diff"": { ""prepared"": ""--- system:/running-config\n+++ session:/ansible_1473770349-session-config\n@@ -35,6 +35,8 @@\n shutdown\n !\n interface Ethernet5\n+ description this is a test\n+ shutdown\n !\n interface Ethernet6\n shutdown\n"" }, ""warnings"": [] } } ``` ",1,eos config updates not defined when using src improve docs issue type bug report component name eos config ansible version ansible version ansible eos cmd v item last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary in eos template when using src we return updates assuming there are any in eos config the replacement we only return updates when the module has been called with lines i was under the impression that config should have feature parity with the older template modules this may be defined as a feature request rather than a bug also from looking at the code this may well apply to all config modules steps to reproduce name configure device with config eos config src basic config provider cli register result name xoxo debug debug msg result expected results updates to be returned when there are changes actual results ok msg changed true diff prepared system running config n session ansible session config n n shutdown n n interface n description this is a test n shutdown n n interface n shutdown n warnings ,1 1813,6577312022.0,IssuesEvent,2017-09-12 00:01:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,FR: yum module should support sub commands such as 'cache',affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible-playbook 2.1.0.0 config file = /Users/g.lynch/git/tos/ansible_role_yum/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT CentOS 6.7, 6.8, 7.x ##### SUMMARY In order to flush/make the cache when _not_ installing a package, it's currently necessary to call yum directly using `command` or `shell`. This results in a `[WARNING]: consider using yum module rather than running yum`. The warning is rather annoying in this situation. ##### STEPS TO REPRODUCE Create a role to manage yum repositories - use yum_repositories - finish with flushing the cache, currently have to use command/shell e.g. 
``` --- - name: configure yum template: src: ""{{ yum_config|basename }}.j2"" dest: ""{{ yum_config }}"" - name: configure all yum repositories yum_repository: baseurl: ""{{ item.baseurl|default(omit) }}"" description: ""{{ item.description|default('The ' + item.name + ' repository') }}"" enabled: ""{{ item.enabled|default(True) }}"" gpgcakey: ""{{ item.gpgcakey|default(omit) }}"" gpgcheck: ""{{ item.gpgcheck|default(False) }}"" gpgkey: ""{{ item.gpgkey|default(omit) }}"" mirrorlist: ""{{ item.mirrorlist|default(omit) }}"" name: ""{{ item.name }}"" state: ""{{ item.state|default('present') }}"" with_flattened: - ""{{ yum_repos_base }}"" - ""{{ yum_repos_apps }}"" # NOTE: currently the yum module does not support cache actions by themselves - name: clean yum cache shell: yum clean all when: yum_clean_all|bool ``` ##### EXPECTED RESULTS Ability to use yum module with cache command handling. ##### ACTUAL RESULTS ``` TASK [ansible_role_yum : clean yum cache] ************************************** changed: [default] [WARNING]: Consider using yum module rather than running yum ``` ",True,"FR: yum module should support sub commands such as 'cache' - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible-playbook 2.1.0.0 config file = /Users/g.lynch/git/tos/ansible_role_yum/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT CentOS 6.7, 6.8, 7.x ##### SUMMARY In order to flush/make the cache when _not_ installing a package, it's currently necessary to call yum directly using `command` or `shell`. This results in a `[WARNING]: consider using yum module rather than running yum`. The warning is rather annoying in this situation. ##### STEPS TO REPRODUCE Create a role to manage yum repositories - use yum_repositories - finish with flushing the cache, currently have to use command/shell e.g. ``` --- - name: configure yum template: src: ""{{ yum_config|basename }}.j2"" dest: ""{{ yum_config }}"" - name: configure all yum repositories yum_repository: baseurl: ""{{ item.baseurl|default(omit) }}"" description: ""{{ item.description|default('The ' + item.name + ' repository') }}"" enabled: ""{{ item.enabled|default(True) }}"" gpgcakey: ""{{ item.gpgcakey|default(omit) }}"" gpgcheck: ""{{ item.gpgcheck|default(False) }}"" gpgkey: ""{{ item.gpgkey|default(omit) }}"" mirrorlist: ""{{ item.mirrorlist|default(omit) }}"" name: ""{{ item.name }}"" state: ""{{ item.state|default('present') }}"" with_flattened: - ""{{ yum_repos_base }}"" - ""{{ yum_repos_apps }}"" # NOTE: currently the yum module does not support cache actions by themselves - name: clean yum cache shell: yum clean all when: yum_clean_all|bool ``` ##### EXPECTED RESULTS Ability to use yum module with cache command handling. 
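Until the yum module grows a cache/clean action, the shell call can at least be made quiet: the command module accepts warn: no, which suppresses the "consider using yum module" hint. A sketch of the same clean-up step:

```yaml
# Sketch: identical behaviour, without the module-suggestion warning.
- name: clean yum cache
  command: yum clean all
  args:
    warn: no
  when: yum_clean_all|bool
```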
##### ACTUAL RESULTS ``` TASK [ansible_role_yum : clean yum cache] ************************************** changed: [default] [WARNING]: Consider using yum module rather than running yum ``` ",1,fr yum module should support sub commands such as cache issue type feature idea component name yum ansible version ansible playbook config file users g lynch git tos ansible role yum ansible cfg configured module search path default w o overrides configuration os environment centos x summary in order to flush make the cache when not installing a package it s currently necessary to call yum directly using command or shell this results in a consider using yum module rather than running yum the warning is rather annoying in this situation steps to reproduce create a role to manage yum repositories use yum repositories finish with flushing the cache currently have to use command shell e g name configure yum template src yum config basename dest yum config name configure all yum repositories yum repository baseurl item baseurl default omit description item description default the item name repository enabled item enabled default true gpgcakey item gpgcakey default omit gpgcheck item gpgcheck default false gpgkey item gpgkey default omit mirrorlist item mirrorlist default omit name item name state item state default present with flattened yum repos base yum repos apps note currently the yum module does not support cache actions by themselves name clean yum cache shell yum clean all when yum clean all bool expected results ability to use yum module with cache command handling actual results task changed consider using yum module rather than running yum ,1 1609,6572623808.0,IssuesEvent,2017-09-11 03:50:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Upload binary file possibility in URI module,affects_1.9 feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: uri ##### Ansible Version: 1.9.2 ##### Environment: N/A ##### Summary: It would be nice to have option to upload binary file to HTTP endpoint using HTTP PUT. Expecting behaviour similar to following curl command: ``` curl -k -X PUT -u user:password --upload-file test.jar ""https://IP:PORT/endpoint/"" ``` and example of possible URI usage: ``` - name: Push jar to nexus uri: url: ""https://{{ inventory_hostname }}:{{ port }}/{{ endpoint }}"" src: /tmp/{{ file }}.jar user: ""{{ user }}"" password: ""{{ password }}"" method: PUT ``` ",True,"Upload binary file possibility in URI module - ##### Issue Type: - Feature Idea ##### Plugin Name: uri ##### Ansible Version: 1.9.2 ##### Environment: N/A ##### Summary: It would be nice to have option to upload binary file to HTTP endpoint using HTTP PUT. 
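As an interim sketch built only from the report's own variables, the PUT upload can be wrapped in a task that shells out to curl; no_log keeps the credentials out of the run output:

```yaml
# Sketch: stream the jar with curl until uri can take a file body.
- name: push jar to nexus
  command: >
    curl -k -X PUT -u {{ user }}:{{ password }}
    --upload-file /tmp/{{ file }}.jar
    https://{{ inventory_hostname }}:{{ port }}/{{ endpoint }}
  no_log: true
```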
Expecting behaviour similar to following curl command: ``` curl -k -X PUT -u user:password --upload-file test.jar ""https://IP:PORT/endpoint/"" ``` and example of possible URI usage: ``` - name: Push jar to nexus uri: url: ""https://{{ inventory_hostname }}:{{ port }}/{{ endpoint }}"" src: /tmp/{{ file }}.jar user: ""{{ user }}"" password: ""{{ password }}"" method: PUT ``` ",1,upload binary file possibility in uri module issue type feature idea plugin name uri ansible version environment n a summary it would be nice to have option to upload binary file to http endpoint using http put expecting behaviour similar to following curl command curl k x put u user password upload file test jar and example of possible uri usage name push jar to nexus uri url inventory hostname port endpoint src tmp file jar user user password password method put ,1 984,4750365740.0,IssuesEvent,2016-10-22 09:34:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,AttributeError: 'NoneType' object has no attribute 'describe_alarms' when creating new ec2_metric_alarm,affects_1.8 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_metric_alarm module ##### ANSIBLE VERSION ansible (1.8.4) ##### SUMMARY Versions run on: ansible (1.8.4) boto (2.36.0) botocore (0.93.0) Playbook run: - name: Create a Cloudwatch metric alarm for up scaling and associate it with a Scaling Policy ec2_metric_alarm: name: Product 15.1.1.0.0 opsdev upscale description: Triggered when the CPU of a node is more than 50% for 5 minutes namespace: ""AWS/EC2"" metric: CPUUtilization comparison: "">"" threshold: 50.0 unit: Percent period: 100 evaluation_periods: 3 statistic: Average dimensions: {'AutoScalingGroupName':'Product opsdev'} state: present alarm_actions: ""{{ sp_up_result.arn }}"" The sp_up_result variable is an ec2_scaling_policy task register as demonstrated in this link (http://stackoverflow.com/questions/24686407/unable-to-retrieve-aws-scaling-policy-information-from-ec2-scaling-policy-module). 
Error is the following: failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 2069, in main() File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 2065, in main create_metric_alarm(connection, module) File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 1934, in create_metric_alarm alarms = connection.describe_alarms(alarm_names=[name]) AttributeError: 'NoneType' object has no attribute 'describe_alarms' ",True,"AttributeError: 'NoneType' object has no attribute 'describe_alarms' when creating new ec2_metric_alarm - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_metric_alarm module ##### ANSIBLE VERSION ansible (1.8.4) ##### SUMMARY Versions run on: ansible (1.8.4) boto (2.36.0) botocore (0.93.0) Playbook run: - name: Create a Cloudwatch metric alarm for up scaling and associate it with a Scaling Policy ec2_metric_alarm: name: Product 15.1.1.0.0 opsdev upscale description: Triggered when the CPU of a node is more than 50% for 5 minutes namespace: ""AWS/EC2"" metric: CPUUtilization comparison: "">"" threshold: 50.0 unit: Percent period: 100 evaluation_periods: 3 statistic: Average dimensions: {'AutoScalingGroupName':'Product opsdev'} state: present alarm_actions: ""{{ sp_up_result.arn }}"" The sp_up_result variable is an ec2_scaling_policy task register as demonstrated in this link (http://stackoverflow.com/questions/24686407/unable-to-retrieve-aws-scaling-policy-information-from-ec2-scaling-policy-module). Error is the following: failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 2069, in main() File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 2065, in main create_metric_alarm(connection, module) File ""/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm"", line 1934, in create_metric_alarm alarms = connection.describe_alarms(alarm_names=[name]) AttributeError: 'NoneType' object has no attribute 'describe_alarms' ",1,attributeerror nonetype object has no attribute describe alarms when creating new metric alarm issue type bug report component name metric alarm module ansible version ansible summary versions run on ansible boto botocore playbook run name create a cloudwatch metric alarm for up scaling and associate it with a scaling policy metric alarm name product opsdev upscale description triggered when the cpu of a node is more than for minutes namespace aws metric cpuutilization comparison threshold unit percent period evaluation periods statistic average dimensions autoscalinggroupname product opsdev state present alarm actions sp up result arn the sp up result variable is an scaling policy task register as demonstrated in this link error is the following failed failed true parsed false traceback most recent call last file home lostmimic ansible tmp ansible tmp metric alarm line in main file home lostmimic ansible tmp ansible tmp metric alarm line in main create metric alarm connection module file home lostmimic ansible tmp ansible tmp metric alarm line in create metric alarm alarms connection describe alarms alarm names attributeerror nonetype object has no attribute describe alarms ,1 
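The describe_alarms traceback above means the CloudWatch connection object was never created; one common cause is the region not being resolved from the environment. A hedged variant of the same task that passes region explicitly (us-east-1 is only an illustrative value) would look like this:

```yaml
# Sketch: same alarm definition, with an explicit region so the module can
# open a CloudWatch connection; alarm_actions is given as a list.
- name: Create a Cloudwatch metric alarm for up scaling
  ec2_metric_alarm:
    name: Product 15.1.1.0.0 opsdev upscale
    region: us-east-1          # illustrative, not from the report
    namespace: "AWS/EC2"
    metric: CPUUtilization
    statistic: Average
    comparison: ">"
    threshold: 50.0
    unit: Percent
    period: 100
    evaluation_periods: 3
    dimensions: { AutoScalingGroupName: "Product opsdev" }
    alarm_actions: ["{{ sp_up_result.arn }}"]
    state: present
```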
1113,4988930209.0,IssuesEvent,2016-12-08 10:06:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template_module: does not fail anymore when source file is absent,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME template module ##### ANSIBLE VERSION 2.1.0 ##### SUMMARY Issue Type: Bug Report Ansible Version: ansible-playbook 2.1.0 (devel 45355cd566) last updated 2016/01/08 15:07:35 (GMT +200) Environment: Ubuntu 15.04 Problem: I thing the expected behaviour is that Ansible fails on runtime when a source file is missing. The old error looks like this: ``` fatal: [{{hostname}}] => input file not found at /{foobar}/roles/foobar/templates/etc/apt/foo/bar.j2 or /{foobar}/etc/apt/foo/bar.j2 ``` Using the development version a non-existing source file is just ignored and marked as green during runtime - the destination file isn't touched at all. ",True,"template_module: does not fail anymore when source file is absent - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME template module ##### ANSIBLE VERSION 2.1.0 ##### SUMMARY Issue Type: Bug Report Ansible Version: ansible-playbook 2.1.0 (devel 45355cd566) last updated 2016/01/08 15:07:35 (GMT +200) Environment: Ubuntu 15.04 Problem: I thing the expected behaviour is that Ansible fails on runtime when a source file is missing. The old error looks like this: ``` fatal: [{{hostname}}] => input file not found at /{foobar}/roles/foobar/templates/etc/apt/foo/bar.j2 or /{foobar}/etc/apt/foo/bar.j2 ``` Using the development version a non-existing source file is just ignored and marked as green during runtime - the destination file isn't touched at all. ",1,template module does not fail anymore when source file is absent issue type bug report component name template module ansible version summary issue type bug report ansible version ansible playbook devel last updated gmt environment ubuntu problem i thing the expected behaviour is that ansible fails on runtime when a source file is missing the old error looks like this fatal input file not found at foobar roles foobar templates etc apt foo bar or foobar etc apt foo bar using the development version a non existing source file is just ignored and marked as green during runtime the destination file isn t touched at all ,1 746,4350941179.0,IssuesEvent,2016-07-31 15:30:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,digital_ocean backup_enabled: yes is enabling private_networking,bug_report cloud digital_ocean waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME digital_ocean ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.0.2.0 config file = /Users/jln/git/jpw/black-ops/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Attempting to create a DO droplet with private_networking and back_up enabled. If I set only private_networking: yes, private networking is not enabled on the droplet. If I set only backups_enabled: yes, backups are not enabled, but private networking is. If I set both private_networking: yes and backups_enabled: yes, private networking is enabled but backups are not. 
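The behaviour described in this digital_ocean summary (setting backups_enabled turns on private networking, while the backups setting itself never takes effect) is the classic symptom of two boolean flags being passed positionally in the wrong order when the droplet-creation request is built. The sketch below is purely hypothetical, to illustrate that failure mode and why keyword arguments avoid it; `api.new_droplet` and its parameter names are invented for this illustration and are not the real module or dopy interface:
```
# Hypothetical illustration only; api.new_droplet and its parameter names are
# invented for this sketch and are not the digital_ocean module's real API.
def create_droplet_buggy(api, name, size, region, image,
                         private_networking, backups_enabled):
    # BUG: the two booleans are swapped, so backups_enabled ends up in the
    # private-networking slot and the backups flag is silently dropped.
    return api.new_droplet(name, size, image, region,
                           backups_enabled, private_networking)

def create_droplet_fixed(api, name, size, region, image,
                         private_networking, backups_enabled):
    # Passing the flags by keyword makes this ordering mistake impossible.
    return api.new_droplet(name, size, image, region,
                           private_networking=private_networking,
                           backups_enabled=backups_enabled)
```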
##### STEPS TO REPRODUCE ``` - name: create database droplet digital_ocean: state: present command: droplet name: testWithNetworking size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes private_networking: yes register: droplet - name: create database droplet digital_ocean: state: present command: droplet name: testWithBackups size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes backups_enabled: yes register: droplet - name: create database droplet digital_ocean: state: present command: droplet name: testWithBoth size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes private_networking: yes backups_enabled: yes register: droplet ``` ##### EXPECTED RESULTS I expected three droplets: * One with Private Networking enabled, but not Backups. * One with Backups enabled, but not Private Networking. * One with Private Networking and Backups enabled. ##### ACTUAL RESULTS ![image](https://cloud.githubusercontent.com/assets/3778233/14833019/10217c7a-0bcb-11e6-874b-32e88241d04c.png) ![image](https://cloud.githubusercontent.com/assets/3778233/14833008/fcfb4888-0bca-11e6-8a8c-0c3039486766.png) ![image](https://cloud.githubusercontent.com/assets/3778233/14833033/26cb467c-0bcb-11e6-9eb1-4c54b8714e4b.png) ``` Using /Users/jln/git/jpw/black-ops/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: create_db_server.yml ************************************************* 2 plays in create_db_server.yml PLAY [local] ******************************************************************* TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmpureuWb TO /Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/setup; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/"" > /dev/null 2>&1' ok: [localhost] TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:10 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmp1K1RbK TO /Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:13:25Z"", ""disk"": 40, ""drives"": [], ""features"": [""ipv6"", ""virtio""], ""id"": 14185559, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", 
""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""104.236.71.85"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithNetworking"", ""networks"": {""v4"": [{""gateway"": ""104.236.64.1"", ""ip_address"": ""104.236.71.85"", ""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": [{""gateway"": ""2604:A880:0800:0010:0000:0000:0000:0001"", ""ip_address"": ""2604:A880:0800:0010:0000:0000:0B1E:1001"", ""netmask"": 64, ""type"": ""public""}]}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": ""nyc3""}, ""size"": {""available"": true, ""disk"": 40, ""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": false, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", ""name"": ""testWithNetworking"", ""private_networking"": true, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:22 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmp62vX08 TO /Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:14:09Z"", ""disk"": 40, ""drives"": [], ""features"": [""private_networking"", ""virtio""], ""id"": 14185593, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", ""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""10.132.15.191"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithBackups"", ""networks"": {""v4"": [{""gateway"": ""10.132.0.1"", ""ip_address"": ""10.132.15.191"", ""netmask"": ""255.255.0.0"", ""type"": ""private""}, {""gateway"": ""104.236.192.1"", ""ip_address"": ""104.236.236.56"", 
""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": []}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": ""nyc3""}, ""size"": {""available"": true, ""disk"": 40, ""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": true, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", ""name"": ""testWithBackups"", ""private_networking"": false, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmppdM6x9 TO /Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:14:52Z"", ""disk"": 40, ""drives"": [], ""features"": [""private_networking"", ""ipv6"", ""virtio""], ""id"": 14185606, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", ""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""10.132.2.33"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithBoth"", ""networks"": {""v4"": [{""gateway"": ""10.132.0.1"", ""ip_address"": ""10.132.2.33"", ""netmask"": ""255.255.0.0"", ""type"": ""private""}, {""gateway"": ""104.131.0.1"", ""ip_address"": ""104.131.52.215"", ""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": [{""gateway"": ""2604:A880:0800:0010:0000:0000:0000:0001"", ""ip_address"": ""2604:A880:0800:0010:0000:0000:0B1E:2001"", ""netmask"": 64, ""type"": ""public""}]}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": ""nyc3""}, ""size"": {""available"": true, ""disk"": 40, 
""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": true, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", ""name"": ""testWithBoth"", ""private_networking"": true, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : debug] ********************************************************* task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:61 ok: [localhost] => { ""msg"": ""testWithBoth(14185606) - 10.132.2.33"" } PLAY [db] ********************************************************************** skipping: no hosts matched PLAY RECAP ********************************************************************* localhost : ok=5 changed=3 unreachable=0 failed=0```",True,"digital_ocean backup_enabled: yes is enabling private_networking - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME digital_ocean ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.0.2.0 config file = /Users/jln/git/jpw/black-ops/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Attempting to create a DO droplet with private_networking and back_up enabled. If I set only private_networking: yes, private networking is not enabled on the droplet. If I set only backups_enabled: yes, backups are not enabled, but private networking is. If I set both private_networking: yes and backups_enabled: yes, private networking is enabled but backups are not. ##### STEPS TO REPRODUCE ``` - name: create database droplet digital_ocean: state: present command: droplet name: testWithNetworking size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes private_networking: yes register: droplet - name: create database droplet digital_ocean: state: present command: droplet name: testWithBackups size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes backups_enabled: yes register: droplet - name: create database droplet digital_ocean: state: present command: droplet name: testWithBoth size_id: 2gb region_id: nyc3 image_id: ubuntu-14-04-x64 unique_name: yes private_networking: yes backups_enabled: yes register: droplet ``` ##### EXPECTED RESULTS I expected three droplets: * One with Private Networking enabled, but not Backups. * One with Backups enabled, but not Private Networking. * One with Private Networking and Backups enabled. 
##### ACTUAL RESULTS ![image](https://cloud.githubusercontent.com/assets/3778233/14833019/10217c7a-0bcb-11e6-874b-32e88241d04c.png) ![image](https://cloud.githubusercontent.com/assets/3778233/14833008/fcfb4888-0bca-11e6-8a8c-0c3039486766.png) ![image](https://cloud.githubusercontent.com/assets/3778233/14833033/26cb467c-0bcb-11e6-9eb1-4c54b8714e4b.png) ``` Using /Users/jln/git/jpw/black-ops/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: create_db_server.yml ************************************************* 2 plays in create_db_server.yml PLAY [local] ******************************************************************* TASK [setup] ******************************************************************* ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmpureuWb TO /Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/setup localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/setup; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701603.74-47521402779626/"" > /dev/null 2>&1' ok: [localhost] TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:10 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmp1K1RbK TO /Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701604.63-264720731229174/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:13:25Z"", ""disk"": 40, ""drives"": [], ""features"": [""ipv6"", ""virtio""], ""id"": 14185559, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", ""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""104.236.71.85"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithNetworking"", ""networks"": {""v4"": [{""gateway"": ""104.236.64.1"", ""ip_address"": ""104.236.71.85"", ""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": [{""gateway"": ""2604:A880:0800:0010:0000:0000:0000:0001"", ""ip_address"": ""2604:A880:0800:0010:0000:0000:0B1E:1001"", ""netmask"": 64, ""type"": ""public""}]}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": 
""nyc3""}, ""size"": {""available"": true, ""disk"": 40, ""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": false, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", ""name"": ""testWithNetworking"", ""private_networking"": true, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:22 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmp62vX08 TO /Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701648.46-30799619553555/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:14:09Z"", ""disk"": 40, ""drives"": [], ""features"": [""private_networking"", ""virtio""], ""id"": 14185593, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", ""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""10.132.15.191"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithBackups"", ""networks"": {""v4"": [{""gateway"": ""10.132.0.1"", ""ip_address"": ""10.132.15.191"", ""netmask"": ""255.255.0.0"", ""type"": ""private""}, {""gateway"": ""104.236.192.1"", ""ip_address"": ""104.236.236.56"", ""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": []}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": ""nyc3""}, ""size"": {""available"": true, ""disk"": 40, ""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": true, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", 
""name"": ""testWithBackups"", ""private_networking"": false, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : create database droplet] *************************************** task path: /Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: jln localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663 `"" )' localhost PUT /var/folders/6g/mhwwnt7j3259lddvql8jhkbw0000gn/T/tmppdM6x9 TO /Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/digital_ocean localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/digital_ocean; rm -rf ""/Users/jln/.ansible/tmp/ansible-tmp-1461701690.74-258804638768663/"" > /dev/null 2>&1' changed: [localhost] => {""changed"": true, ""droplet"": {""backup_ids"": [], ""created_at"": ""2016-04-26T20:14:52Z"", ""disk"": 40, ""drives"": [], ""features"": [""private_networking"", ""ipv6"", ""virtio""], ""id"": 14185606, ""image"": {""created_at"": ""2016-02-23T23:01:40Z"", ""distribution"": ""Ubuntu"", ""id"": 15943679, ""min_disk_size"": 20, ""name"": ""14.04.4 x64"", ""public"": true, ""regions"": [""nyc1"", ""sfo1"", ""nyc2"", ""ams2"", ""sgp1"", ""lon1"", ""nyc3"", ""ams3"", ""fra1"", ""tor1"", ""sfo2""], ""size_gigabytes"": 1.58, ""slug"": ""ubuntu-14-04-x64"", ""type"": ""snapshot""}, ""ip_address"": ""10.132.2.33"", ""kernel"": null, ""locked"": false, ""memory"": 2048, ""name"": ""testWithBoth"", ""networks"": {""v4"": [{""gateway"": ""10.132.0.1"", ""ip_address"": ""10.132.2.33"", ""netmask"": ""255.255.0.0"", ""type"": ""private""}, {""gateway"": ""104.131.0.1"", ""ip_address"": ""104.131.52.215"", ""netmask"": ""255.255.192.0"", ""type"": ""public""}], ""v6"": [{""gateway"": ""2604:A880:0800:0010:0000:0000:0000:0001"", ""ip_address"": ""2604:A880:0800:0010:0000:0000:0B1E:2001"", ""netmask"": 64, ""type"": ""public""}]}, ""next_backup_window"": null, ""region"": {""available"": true, ""features"": [""private_networking"", ""backups"", ""ipv6"", ""metadata""], ""name"": ""New York 3"", ""sizes"": [""32gb"", ""16gb"", ""2gb"", ""1gb"", ""4gb"", ""8gb"", ""512mb"", ""64gb"", ""48gb""], ""slug"": ""nyc3""}, ""size"": {""available"": true, ""disk"": 40, ""memory"": 2048, ""price_hourly"": 0.02976, ""price_monthly"": 20.0, ""regions"": [""ams1"", ""ams2"", ""ams3"", ""fra1"", ""lon1"", ""nyc1"", ""nyc2"", ""nyc3"", ""sfo1"", ""sgp1"", ""tor1""], ""slug"": ""2gb"", ""transfer"": 3.0, ""vcpus"": 2}, ""size_slug"": ""2gb"", ""snapshot_ids"": [], ""status"": ""active"", ""tags"": [], ""vcpus"": 2}, ""invocation"": {""module_args"": {""api_token"": null, ""backups_enabled"": true, ""command"": ""droplet"", ""id"": null, ""image_id"": ""ubuntu-14-04-x64"", ""name"": ""testWithBoth"", ""private_networking"": true, ""region_id"": ""nyc3"", ""size_id"": ""2gb"", ""ssh_key_ids"": null, ""ssh_pub_key"": null, ""state"": ""present"", ""unique_name"": true, ""user_data"": null, ""virtio"": true, ""wait"": true, ""wait_timeout"": 300}, ""module_name"": ""digital_ocean""}} TASK [droplet : debug] ********************************************************* task path: 
/Users/jln/git/jpw/black-ops/roles/droplet/tasks/main.yml:61 ok: [localhost] => { ""msg"": ""testWithBoth(14185606) - 10.132.2.33"" } PLAY [db] ********************************************************************** skipping: no hosts matched PLAY RECAP ********************************************************************* localhost : ok=5 changed=3 unreachable=0 failed=0```",1,digital ocean backup enabled yes is enabling private networking issue type bug report component name digital ocean ansible version ansible version ansible config file users jln git jpw black ops ansible cfg configured module search path default w o overrides configuration host key checking false os environment control machine osx droplet ubuntu summary attempting to create a do droplet with private networking and back up enabled if i set only private networking yes private networking is not enabled on the droplet if i set only backups enabled yes backups are not enabled but private networking is if i set both private networking yes and backups enabled yes private networking is enabled but backups are not steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create database droplet digital ocean state present command droplet name testwithnetworking size id region id image id ubuntu unique name yes private networking yes register droplet name create database droplet digital ocean state present command droplet name testwithbackups size id region id image id ubuntu unique name yes backups enabled yes register droplet name create database droplet digital ocean state present command droplet name testwithboth size id region id image id ubuntu unique name yes private networking yes backups enabled yes register droplet expected results i expected three droplets one with private networking enabled but not backups one with backups enabled but not private networking one with private networking and backups enabled actual results using users jln git jpw black ops ansible cfg as config file loaded callback default of type stdout playbook create db server yml plays in create db server yml play task establish local connection for user jln localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders t tmpureuwb to users jln ansible tmp ansible tmp setup localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf python users jln ansible tmp ansible tmp setup rm rf users jln ansible tmp ansible tmp dev null ok task task path users jln git jpw black ops roles droplet tasks main yml establish local connection for user jln localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders t to users jln ansible tmp ansible tmp digital ocean localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf python users jln ansible tmp ansible tmp digital ocean rm rf users jln ansible tmp ansible tmp dev null changed changed true droplet backup ids created at disk drives features id image created at distribution ubuntu id min disk size name public true regions size gigabytes slug ubuntu type snapshot ip address kernel null locked false memory name testwithnetworking networks next backup window null region available true features name new york sizes slug size available true disk memory price hourly price monthly regions slug transfer vcpus size slug snapshot ids status active tags vcpus invocation 
module args api token null backups enabled false command droplet id null image id ubuntu name testwithnetworking private networking true region id size id ssh key ids null ssh pub key null state present unique name true user data null virtio true wait true wait timeout module name digital ocean task task path users jln git jpw black ops roles droplet tasks main yml establish local connection for user jln localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders t to users jln ansible tmp ansible tmp digital ocean localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf python users jln ansible tmp ansible tmp digital ocean rm rf users jln ansible tmp ansible tmp dev null changed changed true droplet backup ids created at disk drives features id image created at distribution ubuntu id min disk size name public true regions size gigabytes slug ubuntu type snapshot ip address kernel null locked false memory name testwithbackups networks next backup window null region available true features name new york sizes slug size available true disk memory price hourly price monthly regions slug transfer vcpus size slug snapshot ids status active tags vcpus invocation module args api token null backups enabled true command droplet id null image id ubuntu name testwithbackups private networking false region id size id ssh key ids null ssh pub key null state present unique name true user data null virtio true wait true wait timeout module name digital ocean task task path users jln git jpw black ops roles droplet tasks main yml establish local connection for user jln localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders t to users jln ansible tmp ansible tmp digital ocean localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf python users jln ansible tmp ansible tmp digital ocean rm rf users jln ansible tmp ansible tmp dev null changed changed true droplet backup ids created at disk drives features id image created at distribution ubuntu id min disk size name public true regions size gigabytes slug ubuntu type snapshot ip address kernel null locked false memory name testwithboth networks next backup window null region available true features name new york sizes slug size available true disk memory price hourly price monthly regions slug transfer vcpus size slug snapshot ids status active tags vcpus invocation module args api token null backups enabled true command droplet id null image id ubuntu name testwithboth private networking true region id size id ssh key ids null ssh pub key null state present unique name true user data null virtio true wait true wait timeout module name digital ocean task task path users jln git jpw black ops roles droplet tasks main yml ok msg testwithboth play skipping no hosts matched play recap localhost ok changed unreachable failed ,1 1889,6577532919.0,IssuesEvent,2017-09-12 01:34:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,iam_policy module errors out when using policy_json field,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: iam_policy ##### Ansible Version: 2.0.1.0 ##### Ansible Configuration: n/a ##### Environment: n/a ##### Summary: iam_policy module errors out if you use the policy_json field. PR #2730 introduced this bug. 
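The iam_policy traceback reproduced in the steps below shows `UnboundLocalError: local variable 'pdoc' referenced before assignment`: `pdoc` is only bound on the `policy_document` code path, so supplying `policy_json` alone reaches the `isinstance` check with the name unset. A simplified reconstruction of the pattern, with the straightforward fix of assigning `pdoc` on every branch (illustrative only, not the module's exact code):
```
# Simplified reconstruction for illustration; not the iam_policy module's exact code.
def build_policy_document(module, policy_document=None, policy_json=None):
    if policy_document is not None:
        with open(policy_document) as f:
            pdoc = f.read()
    elif policy_json is not None:
        # This branch is what the buggy code path was missing: without it,
        # pdoc is never bound and the isinstance() check below raises
        # UnboundLocalError when only policy_json is supplied.
        pdoc = policy_json
    else:
        module.fail_json(msg="one of policy_document or policy_json is required")
    if not isinstance(pdoc, str):  # basestring on the Python 2 code path
        module.fail_json(msg="the policy document must be a string")
    return pdoc
```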
##### Steps To Reproduce: Example: ``` - iam_policy: iam_type: role iam_name: my-role-name policy_name: my-policy-name policy_json: ""{{ lookup( 'template', 'policy.json.j2', convert_data=False) }}"" state: present ``` ##### Expected Results: Expected to succeed. ##### Actual Results: ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1457652747.27-251976645440330/iam_policy"", line 2528, in main() File ""/root/.ansible/tmp/ansible-tmp-1457652747.27-251976645440330/iam_policy"", line 306, in main if not isinstance(pdoc, basestring): UnboundLocalError: local variable 'pdoc' referenced before assignment fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""iam_policy""}, ""parsed"": false} ``` ",True,"iam_policy module errors out when using policy_json field - ##### Issue Type: - Bug Report ##### Plugin Name: iam_policy ##### Ansible Version: 2.0.1.0 ##### Ansible Configuration: n/a ##### Environment: n/a ##### Summary: iam_policy module errors out if you use the policy_json field. PR #2730 introduced this bug. ##### Steps To Reproduce: Example: ``` - iam_policy: iam_type: role iam_name: my-role-name policy_name: my-policy-name policy_json: ""{{ lookup( 'template', 'policy.json.j2', convert_data=False) }}"" state: present ``` ##### Expected Results: Expected to succeed. ##### Actual Results: ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/root/.ansible/tmp/ansible-tmp-1457652747.27-251976645440330/iam_policy"", line 2528, in main() File ""/root/.ansible/tmp/ansible-tmp-1457652747.27-251976645440330/iam_policy"", line 306, in main if not isinstance(pdoc, basestring): UnboundLocalError: local variable 'pdoc' referenced before assignment fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""iam_policy""}, ""parsed"": false} ``` ",1,iam policy module errors out when using policy json field issue type bug report plugin name iam policy ansible version ansible configuration n a environment n a summary iam policy module errors out if you use the policy json field pr introduced this bug steps to reproduce example iam policy iam type role iam name my role name policy name my policy name policy json lookup template policy json convert data false state present expected results expected to succeed actual results an exception occurred during task execution the full traceback is traceback most recent call last file root ansible tmp ansible tmp iam policy line in main file root ansible tmp ansible tmp iam policy line in main if not isinstance pdoc basestring unboundlocalerror local variable pdoc referenced before assignment fatal failed changed false failed true invocation module name iam policy parsed false ,1 1128,4998373040.0,IssuesEvent,2016-12-09 19:39:02,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,[ecs_taskdefinition] Invalid type for parameter containerDefinitions[0],affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `ec2_taskdefinition` ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /srv/code/ops/ansible/ansible.cfg configured module search path = ['./library'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX ##### SUMMARY An exception when trying to use variables for `cpu` and `memory` parameters in the `containers` list. 
##### STEPS TO REPRODUCE Run playbook: ``` yml - hosts: localhost connection: local gather_facts: false vars: - myservice: cpu: 512 memory: 512 tasks: - ecs_taskdefinition: state: present family: ""foo-taskdef"" containers: - name: myservice essential: true cpu: ""{{ myservice.cpu | int }}"" image: ""myimage:latest"" memory: ""{{ myservice.memory | int }}"" ``` ##### EXPECTED RESULTS Create a new ECS task-definition with 512 cpu and memory. ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 221, in main() File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 196, in main module.params['containers'], volumes) File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 134, in register_task containerDefinitions=container_definitions, volumes=volumes) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 278, in _api_call return self._make_api_call(operation_name, kwargs) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 548, in _make_api_call api_params, operation_model, context=request_context) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 601, in _convert_to_request_dict api_params, operation_model) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/validate.py"", line 270, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid type for parameter containerDefinitions[0].cpu, value: 512, type: , valid types: , Invalid type for parameter containerDefinitions[0].memory, value: 512, type: , valid types: , ``` ",True,"[ecs_taskdefinition] Invalid type for parameter containerDefinitions[0] - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `ec2_taskdefinition` ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /srv/code/ops/ansible/ansible.cfg configured module search path = ['./library'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX ##### SUMMARY An exception when trying to use variables for `cpu` and `memory` parameters in the `containers` list. ##### STEPS TO REPRODUCE Run playbook: ``` yml - hosts: localhost connection: local gather_facts: false vars: - myservice: cpu: 512 memory: 512 tasks: - ecs_taskdefinition: state: present family: ""foo-taskdef"" containers: - name: myservice essential: true cpu: ""{{ myservice.cpu | int }}"" image: ""myimage:latest"" memory: ""{{ myservice.memory | int }}"" ``` ##### EXPECTED RESULTS Create a new ECS task-definition with 512 cpu and memory. ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 221, in main() File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 196, in main module.params['containers'], volumes) File ""/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py"", line 134, in register_task containerDefinitions=container_definitions, volumes=volumes) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 278, in _api_call return self._make_api_call(operation_name, kwargs) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 548, in _make_api_call api_params, operation_model, context=request_context) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py"", line 601, in _convert_to_request_dict api_params, operation_model) File ""/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/validate.py"", line 270, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid type for parameter containerDefinitions[0].cpu, value: 512, type: , valid types: , Invalid type for parameter containerDefinitions[0].memory, value: 512, type: , valid types: , ``` ",1, invalid type for parameter containerdefinitions issue type bug report component name taskdefinition ansible version ansible config file srv code ops ansible ansible cfg configured module search path configuration os environment osx summary an exception when trying to use variables for cpu and memory parameters in the containers list steps to reproduce run playbook yml hosts localhost connection local gather facts false vars myservice cpu memory tasks ecs taskdefinition state present family foo taskdef containers name myservice essential true cpu myservice cpu int image myimage latest memory myservice memory int expected results create a new ecs task definition with cpu and memory actual results an exception occurred during task execution the full traceback is traceback most recent call last file var folders t t ansible ansible module ecs taskdefinition py line in main file var folders t t ansible ansible module ecs taskdefinition py line in main module params volumes file var folders t t ansible ansible module ecs taskdefinition py line in register task containerdefinitions container definitions volumes volumes file users rafi local share python envs ansible lib site packages botocore client py line in api call return self make api call operation name kwargs file users rafi local share python envs ansible lib site packages botocore client py line in make api call api params operation model context request context file users rafi local share python envs ansible lib site packages botocore client py line in convert to request dict api params operation model file users rafi local share python envs ansible lib site packages botocore validate py line in serialize to request raise paramvalidationerror report report generate report botocore exceptions paramvalidationerror parameter validation failed invalid type for parameter containerdefinitions cpu value type valid types invalid type for parameter containerdefinitions memory value type valid types ,1 
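The botocore validation error above shows that the templated `cpu` and `memory` values arrive in the module with the wrong type even though the playbook pipes them through `| int`; `register_task_definition` only accepts plain integers for these fields. One defensive option is to coerce the numeric container fields inside the module before calling the API. This is an illustrative workaround sketch, not the shipped ecs_taskdefinition code:
```
# Illustrative workaround sketch, not the shipped ecs_taskdefinition code:
# coerce numeric container fields to int before handing the definitions to
# botocore, which rejects non-integer values for these parameters.
def coerce_container_ints(container_definitions, keys=('cpu', 'memory')):
    for container in container_definitions:
        for key in keys:
            if container.get(key) is not None:
                container[key] = int(container[key])
    return container_definitions
```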
1776,6575809393.0,IssuesEvent,2017-09-11 17:24:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive does not recognize size differs,affects_2.1 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing unusual or special about configuration, simply using defaults. ##### OS / ENVIRONMENT CentOS 6.6 32bit ##### SUMMARY The unarchive task does not consider `Size differs` in the output of gtar in it's is_unarchived method, which perhaps was by initial design or just not thought of. As an enhancement it would be nice to add control over if `Size differs` is considered. The situation that this would be used for is when you have a custom built archive that needs to replace existing files, for example: configuration or python scripts or static website files/resources. I did see a ticket on adding a checksum option so perhaps the above situation could be resolved using that feature instead, in which case this is a duplicate or won't fix. Finally perhaps using the force parameter as a way to always unarchive would also be a solution. This doesn't seem to be the current case, if it is then it should be documented. Here is a snippet from the unarchive.py is_unarchived method and the output of gtar --diff when the file size differs. ``` [root@niu01 v-ui-crs]# gtar --diff -vf /opt/xpressworkx/app-manager/apps/web/application-files.tar application application/ application/__init__.py: Size differs application/configure.py: Mod time differs application/configure.py: Size differs application/override.py: Size differs ``` From /usr/lib/python2.6/site-packages/ansible/modules/core/files/unarchive.py ``` def is_unarchived(self): cmd = '%s -C ""%s"" -d%s' % (self.cmd_path, self.dest, self.zipflag) if self.opts: cmd += ' ' + ' '.join(self.opts) if self.file_args['owner']: cmd += ' --owner=""%s""' % self.file_args['owner'] if self.file_args['group']: cmd += ' --group=""%s""' % self.file_args['group'] if self.file_args['mode']: cmd += ' --mode=""%s""' % self.file_args['mode'] if self.module.params['keep_newer']: cmd += ' --keep-newer-files' if self.excludes: cmd += ' --exclude=""' + '"" --exclude=""'.join(self.excludes) + '""' cmd += ' -f ""%s""' % self.src rc, out, err = self.module.run_command(cmd) .... .... ``` ##### STEPS TO REPRODUCE Simply use an unarchive task in a situation where you need to replace existing files of the same name, owner, group, and mode. ``` - name: Unpack Static Files unarchive: src: ""{{ role_path }}/../../web/sdist/static-files.tar"" dest: ""{{ web_config.data['file_root'] }}"" owner: ""{{ web_config.data['web_user'] }}"" ``` ##### EXPECTED RESULTS Unarchive the tar into the destination even though files of the same name, owner, group and mode exist. ##### ACTUAL RESULTS Task returned ok as in already run. ``` TASK [web-ui-common : Unpack Application Static Files] ************************* ok: [192.168.124.152] ``` ",True,"unarchive does not recognize size differs - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing unusual or special about configuration, simply using defaults. 
##### OS / ENVIRONMENT CentOS 6.6 32bit ##### SUMMARY The unarchive task does not consider `Size differs` in the output of gtar in it's is_unarchived method, which perhaps was by initial design or just not thought of. As an enhancement it would be nice to add control over if `Size differs` is considered. The situation that this would be used for is when you have a custom built archive that needs to replace existing files, for example: configuration or python scripts or static website files/resources. I did see a ticket on adding a checksum option so perhaps the above situation could be resolved using that feature instead, in which case this is a duplicate or won't fix. Finally perhaps using the force parameter as a way to always unarchive would also be a solution. This doesn't seem to be the current case, if it is then it should be documented. Here is a snippet from the unarchive.py is_unarchived method and the output of gtar --diff when the file size differs. ``` [root@niu01 v-ui-crs]# gtar --diff -vf /opt/xpressworkx/app-manager/apps/web/application-files.tar application application/ application/__init__.py: Size differs application/configure.py: Mod time differs application/configure.py: Size differs application/override.py: Size differs ``` From /usr/lib/python2.6/site-packages/ansible/modules/core/files/unarchive.py ``` def is_unarchived(self): cmd = '%s -C ""%s"" -d%s' % (self.cmd_path, self.dest, self.zipflag) if self.opts: cmd += ' ' + ' '.join(self.opts) if self.file_args['owner']: cmd += ' --owner=""%s""' % self.file_args['owner'] if self.file_args['group']: cmd += ' --group=""%s""' % self.file_args['group'] if self.file_args['mode']: cmd += ' --mode=""%s""' % self.file_args['mode'] if self.module.params['keep_newer']: cmd += ' --keep-newer-files' if self.excludes: cmd += ' --exclude=""' + '"" --exclude=""'.join(self.excludes) + '""' cmd += ' -f ""%s""' % self.src rc, out, err = self.module.run_command(cmd) .... .... ``` ##### STEPS TO REPRODUCE Simply use an unarchive task in a situation where you need to replace existing files of the same name, owner, group, and mode. ``` - name: Unpack Static Files unarchive: src: ""{{ role_path }}/../../web/sdist/static-files.tar"" dest: ""{{ web_config.data['file_root'] }}"" owner: ""{{ web_config.data['web_user'] }}"" ``` ##### EXPECTED RESULTS Unarchive the tar into the destination even though files of the same name, owner, group and mode exist. ##### ACTUAL RESULTS Task returned ok as in already run. 
``` TASK [web-ui-common : Unpack Application Static Files] ************************* ok: [192.168.124.152] ``` ",1,unarchive does not recognize size differs issue type feature idea component name unarchive ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing unusual or special about configuration simply using defaults os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos summary the unarchive task does not consider size differs in the output of gtar in it s is unarchived method which perhaps was by initial design or just not thought of as an enhancement it would be nice to add control over if size differs is considered the situation that this would be used for is when you have a custom built archive that needs to replace existing files for example configuration or python scripts or static website files resources i did see a ticket on adding a checksum option so perhaps the above situation could be resolved using that feature instead in which case this is a duplicate or won t fix finally perhaps using the force parameter as a way to always unarchive would also be a solution this doesn t seem to be the current case if it is then it should be documented here is a snippet from the unarchive py is unarchived method and the output of gtar diff when the file size differs gtar diff vf opt xpressworkx app manager apps web application files tar application application application init py size differs application configure py mod time differs application configure py size differs application override py size differs from usr lib site packages ansible modules core files unarchive py def is unarchived self cmd s c s d s self cmd path self dest self zipflag if self opts cmd join self opts if self file args cmd owner s self file args if self file args cmd group s self file args if self file args cmd mode s self file args if self module params cmd keep newer files if self excludes cmd exclude exclude join self excludes cmd f s self src rc out err self module run command cmd steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used simply use an unarchive task in a situation where you need to replace existing files of the same name owner group and mode name unpack static files unarchive src role path web sdist static files tar dest web config data owner web config data expected results unarchive the tar into the destination even though files of the same name owner group and mode exist actual results task returned ok as in already run task ok ,1 740,4348690121.0,IssuesEvent,2016-07-30 03:05:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Using backup and with_items can loose the original file as you get the backup of the next to last item,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ini_file module ##### ANSIBLE VERSION ansible 2.1.0 ##### CONFIGURATION ##### OS / ENVIRONMENT N/A (Linux, Debian Stable, ansible from pip) ##### SUMMARY When modifying a single file with multiple changes using ini_file and with_items, together with the `backup=yes` option, the (possibly created) backup is overwritten by subsequent loop-runs. 
I had a task that modified php.ini in various sections, after a run I noticed there was 1 backup file besides the original file (and a backup I created manually before-hand, just to test). The created backup file already had the 1st change, and the new php.ini had both changes. My guess is that since both tasks ran in the same second, the backup-file name was the same on both runs, and therefore the original backup was overwritten. A single backup-file would be preferred... but I'm guessing every single with_items call will make a new backup file (possibly overwriting the same file every time). ##### STEPS TO REPRODUCE ``` - name: Configure php.ini settings ini_file: dest=/etc/php.ini owner=root group=root mode=0644 backup=yes section={{item.section}} option={{item.option}} value={{item.value}} with_items: - { section: ""Date"", option: ""date.timezone"", value: ""{{timezone_name}}"" } - { section: ""Session"", option: ""session.gc_maxlifetime"", value: ""{{php_session_gc_maxlifetime|default(1440)}}"" } ``` ##### EXPECTED RESULTS A single backup file, which is unchanged from the original. ##### ACTUAL RESULTS ``` -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini.2016-07-13@11:04:08~ -rw-r--r-- 1 root root 69097 Jul 13 11:03 php.ini.my-own-backup ``` ",True,"Using backup and with_items can loose the original file as you get the backup of the next to last item - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ini_file module ##### ANSIBLE VERSION ansible 2.1.0 ##### CONFIGURATION ##### OS / ENVIRONMENT N/A (Linux, Debian Stable, ansible from pip) ##### SUMMARY When modifying a single file with multiple changes using ini_file and with_items, together with the `backup=yes` option, the (possibly created) backup is overwritten by subsequent loop-runs. I had a task that modified php.ini in various sections, after a run I noticed there was 1 backup file besides the original file (and a backup I created manually before-hand, just to test). The created backup file already had the 1st change, and the new php.ini had both changes. My guess is that since both tasks ran in the same second, the backup-file name was the same on both runs, and therefore the original backup was overwritten. A single backup-file would be preferred... but I'm guessing every single with_items call will make a new backup file (possibly overwriting the same file every time). ##### STEPS TO REPRODUCE ``` - name: Configure php.ini settings ini_file: dest=/etc/php.ini owner=root group=root mode=0644 backup=yes section={{item.section}} option={{item.option}} value={{item.value}} with_items: - { section: ""Date"", option: ""date.timezone"", value: ""{{timezone_name}}"" } - { section: ""Session"", option: ""session.gc_maxlifetime"", value: ""{{php_session_gc_maxlifetime|default(1440)}}"" } ``` ##### EXPECTED RESULTS A single backup file, which is unchanged from the original. 
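The reporter's diagnosis is consistent with the backup names in the listing under ACTUAL RESULTS below: the backup suffix is a timestamp with one-second resolution (php.ini.2016-07-13@11:04:08~), so two loop iterations that modify the same file within the same second produce the same backup name and the later one overwrites the earlier, losing the true original. A sketch of a collision-free variant (a hypothetical helper for illustration, not the real AnsibleModule.backup_local):
```
# Hypothetical collision-free backup helper, for illustration only; the real
# AnsibleModule.backup_local may differ. It appends a counter rather than
# overwriting an existing backup created in the same second.
import os
import shutil
import time

def backup_local_unique(path):
    stamp = time.strftime("%Y-%m-%d@%H:%M:%S", time.localtime())
    backup = "%s.%s~" % (path, stamp)
    counter = 0
    while os.path.exists(backup):
        counter += 1
        backup = "%s.%s.%d~" % (path, stamp, counter)
    shutil.copy2(path, backup)
    return backup
```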
##### ACTUAL RESULTS ``` -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini.2016-07-13@11:04:08~ -rw-r--r-- 1 root root 69097 Jul 13 11:03 php.ini.my-own-backup ``` ",1,using backup and with items can loose the original file as you get the backup of the next to last item issue type bug report component name ini file module ansible version ansible configuration os environment n a linux debian stable ansible from pip summary when modifying a single file with multiple changes using ini file and with items together with the backup yes option the possibly created backup is overwritten by subsequent loop runs i had a task that modified php ini in various sections after a run i noticed there was backup file besides the original file and a backup i created manually before hand just to test the created backup file already had the change and the new php ini had both changes my guess is that since both tasks ran in the same second the backup file name was the same on both runs and therefore the original backup was overwritten a single backup file would be preferred but i m guessing every single with items call will make a new backup file possibly overwriting the same file every time steps to reproduce name configure php ini settings ini file dest etc php ini owner root group root mode backup yes section item section option item option value item value with items section date option date timezone value timezone name section session option session gc maxlifetime value php session gc maxlifetime default expected results a single backup file which is unchanged from the original actual results rw r r root root jul php ini rw r r root root jul php ini rw r r root root jul php ini my own backup ,1 1030,4827515754.0,IssuesEvent,2016-11-07 13:52:44,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,temporary AWS Access Keys results in auth failures,affects_1.9 aws bug_report cloud waiting_on_maintainer,"#### Issue Type: Bug Report #### Component Name: ec2 module #### Ansible Version: ansible 1.9.4 #### Ansible Configuration: none #### Environment: Mac OSX 10.11 / Not applicable #### Summary: Using temporary AWS Access Keys results in auth failures #### Steps to reproduce: generate temporary access keys (eg. via STS or SAML provider) attempt to create ec2 resource #### Expected results: ec2 resource is created #### Actual results: AWS AuthFailure exception I originally lodged this against ansible/ansible as https://github.com/ansible/ansible/issues/12959 but I think maybe this should be resolved in the core modules code. 
There's a comment in the code that suggests, that perhaps the modules just need to be modified to use `connect_to_aws()` ``` def get_ec2_creds(module): ''' for compatibility mode with old modules that don't/can't yet use ec2_connect method ''' ``` ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 3070, in main() File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 1249, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 792, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1186, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentialsbcce0f14-b8d4-46e0-a582-17993365b18b from my investigation the issue appears in the module_utils/ec2.py get_ec2_creds( ) which returns ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region ``` since the aws_access_key_id in this example will only work with a security_token, the method is effectively broken. I think the function should at least warn when it detects a security token or an access key that starts with ASIA instead of AKIA. ",True,"temporary AWS Access Keys results in auth failures - #### Issue Type: Bug Report #### Component Name: ec2 module #### Ansible Version: ansible 1.9.4 #### Ansible Configuration: none #### Environment: Mac OSX 10.11 / Not applicable #### Summary: Using temporary AWS Access Keys results in auth failures #### Steps to reproduce: generate temporary access keys (eg. via STS or SAML provider) attempt to create ec2 resource #### Expected results: ec2 resource is created #### Actual results: AWS AuthFailure exception I originally lodged this against ansible/ansible as https://github.com/ansible/ansible/issues/12959 but I think maybe this should be resolved in the core modules code. 
There's a comment in the code that suggests, that perhaps the modules just need to be modified to use `connect_to_aws()` ``` def get_ec2_creds(module): ''' for compatibility mode with old modules that don't/can't yet use ec2_connect method ''' ``` ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 3070, in main() File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 1249, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 792, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1186, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentialsbcce0f14-b8d4-46e0-a582-17993365b18b from my investigation the issue appears in the module_utils/ec2.py get_ec2_creds( ) which returns ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region ``` since the aws_access_key_id in this example will only work with a security_token, the method is effectively broken. I think the function should at least warn when it detects a security token or an access key that starts with ASIA instead of AKIA. ",1,temporary aws access keys results in auth failures issue type bug report component name module ansible version ansible ansible configuration none environment mac osx not applicable summary using temporary aws access keys results in auth failures steps to reproduce generate temporary access keys eg via sts or saml provider attempt to create resource expected results resource is created actual results aws authfailure exception i originally lodged this against ansible ansible as but i think maybe this should be resolved in the core modules code there s a comment in the code that suggests that perhaps the modules just need to be modified to use connect to aws def get creds module for compatibility mode with old modules that don t can t yet use connect method failed failed true parsed false traceback most recent call last file users secole ansible tmp ansible tmp line in main file users secole ansible tmp ansible tmp line in main instance dict array new instance ids changed create instances module vpc file users secole ansible tmp ansible tmp line in create instances vpc id vpc get all subnets subnet ids vpc id file library python site packages boto vpc init py line in get all subnets return self get list describesubnets params file library python site packages boto connection py line in get list raise self responseerror response status response reason body boto exception unauthorized authfailure aws was not able to validate the provided access credentials from my investigation the issue appears in the module utils py get creds which returns url boto params boto params region since the aws access key id in this example will only work with a security token the method is effectively broken i think the function should at least warn when it detects a security token or an access key 
that starts with asia instead of akia ,1 1767,6575035021.0,IssuesEvent,2017-09-11 14:50:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_server module problem with floating_ips and multiple subnets,affects_2.2 cloud feature_idea openstack waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_server ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION host_key_checking = False force_color = 1 scp_if_ssh = True and also set the vault_password_file and inventory args ##### OS / ENVIRONMENT I'm both running and managing a RedHat 7.2 ##### SUMMARY I'm creating instances on OpenStack with the ansible module os_server. I have the floating ips already assigned before executing ansible, so I want to use the floating_ips argument, so the module would assign the floating ips automatically. I have 2 networks in my tenant, each one with its subnet, and the floating ip's have to be assigned to the correct subnet. The problem is that with the floating_ips argument, I can't establish the subnet that the floating ip should be assigned to. It only works if I specify 1 floating IP. It gets correctly assigned, but then I have to assign the second one manually, or using the 'os_floating_ip' module. When I specify both floating ip's in the argument, I get the following message : ``` failed: [localhost] (item=1) => {""extra_data"": null, ""failed"": true, ""item"": ""1"", ""msg"": ""unable to bind a floating ip to server *******: External network ********* is not reachable from subnet ********. Therefore, cannot associate Port ************ with a Floating IP.\nNeutron server returns request_ids: ['req-************']""} ``` ##### STEPS TO REPRODUCE You have to be in a tenant where there are more than one subnet defined and then use the os_server module, defining the floating_ips argument with a floating IP for each subnet. ##### EXPECTED RESULTS I expect the floating ip's to be assigned to the correct subnets ##### ACTUAL RESULTS ``` failed: [localhost] (item=1) => {""extra_data"": null, ""failed"": true, ""item"": ""1"", ""msg"": ""unable to bind a floating ip to server *******: External network ********* is not reachable from subnet ********. Therefore, cannot associate Port ************ with a Floating IP.\nNeutron server returns request_ids: ['req-************']""} ``` ",True,"os_server module problem with floating_ips and multiple subnets - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_server ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION host_key_checking = False force_color = 1 scp_if_ssh = True and also set the vault_password_file and inventory args ##### OS / ENVIRONMENT I'm both running and managing a RedHat 7.2 ##### SUMMARY I'm creating instances on OpenStack with the ansible module os_server. I have the floating ips already assigned before executing ansible, so I want to use the floating_ips argument, so the module would assign the floating ips automatically. I have 2 networks in my tenant, each one with its subnet, and the floating ip's have to be assigned to the correct subnet. The problem is that with the floating_ips argument, I can't establish the subnet that the floating ip should be assigned to. It only works if I specify 1 floating IP. 
It gets correctly assigned, but then I have to assign the second one manually, or using the 'os_floating_ip' module. When I specify both floating ip's in the argument, I get the following message : ``` failed: [localhost] (item=1) => {""extra_data"": null, ""failed"": true, ""item"": ""1"", ""msg"": ""unable to bind a floating ip to server *******: External network ********* is not reachable from subnet ********. Therefore, cannot associate Port ************ with a Floating IP.\nNeutron server returns request_ids: ['req-************']""} ``` ##### STEPS TO REPRODUCE You have to be in a tenant where there are more than one subnet defined and then use the os_server module, defining the floating_ips argument with a floating IP for each subnet. ##### EXPECTED RESULTS I expect the floating ip's to be assigned to the correct subnets ##### ACTUAL RESULTS ``` failed: [localhost] (item=1) => {""extra_data"": null, ""failed"": true, ""item"": ""1"", ""msg"": ""unable to bind a floating ip to server *******: External network ********* is not reachable from subnet ********. Therefore, cannot associate Port ************ with a Floating IP.\nNeutron server returns request_ids: ['req-************']""} ``` ",1,os server module problem with floating ips and multiple subnets issue type feature idea component name os server ansible version ansible config file configured module search path default w o overrides configuration host key checking false force color scp if ssh true and also set the vault password file and inventory args os environment i m both running and managing a redhat summary i m creating instances on openstack with the ansible module os server i have the floating ips already assigned before executing ansible so i want to use the floating ips argument so the module would assign the floating ips automatically i have networks in my tenant each one with its subnet and the floating ip s have to be assigned to the correct subnet the problem is that with the floating ips argument i can t establish the subnet that the floating ip should be assigned to it only works if i specify floating ip it gets correctly assigned but then i have to assign the second one manually or using the os floating ip module when i specify both floating ip s in the argument i get the following message failed item extra data null failed true item msg unable to bind a floating ip to server external network is not reachable from subnet therefore cannot associate port with a floating ip nneutron server returns request ids steps to reproduce you have to be in a tenant where there are more than one subnet defined and then use the os server module defining the floating ips argument with a floating ip for each subnet expected results i expect the floating ip s to be assigned to the correct subnets actual results failed item extra data null failed true item msg unable to bind a floating ip to server external network is not reachable from subnet therefore cannot associate port with a floating ip nneutron server returns request ids ,1 1876,6577504784.0,IssuesEvent,2017-09-12 01:22:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Hostname not set correctly on Gentoo Linux with systemd,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Host OS: N/A Target OS: Gentoo Linux ##### SUMMARY The hostname task 
only updates /etc/conf.d/hostname, no difference what init are using: OpenRC or systemd. This pull request can be relevant to this bug https://github.com/ansible/ansible-modules-core/pull/2341 See also issue https://github.com/ansible/ansible-modules-core/issues/573 ##### STEPS TO REPRODUCE 1. Set hostname using the hostname task on target Gentoo Linux with systemd. 2. Check /etc/hostname 3. Check /etc/conf.d/hostname ##### EXPECTED RESULTS If target OS is Gentoo Linux with systemd then SystemdStrategy should be used, not OpenRCStrategy. Module should works as expected and should checks if systemd is active, like in module service.py ##### ACTUAL RESULTS Actually only /etc/conf.d/hostname changed. ",True,"Hostname not set correctly on Gentoo Linux with systemd - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME hostname ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Host OS: N/A Target OS: Gentoo Linux ##### SUMMARY The hostname task only updates /etc/conf.d/hostname, no difference what init are using: OpenRC or systemd. This pull request can be relevant to this bug https://github.com/ansible/ansible-modules-core/pull/2341 See also issue https://github.com/ansible/ansible-modules-core/issues/573 ##### STEPS TO REPRODUCE 1. Set hostname using the hostname task on target Gentoo Linux with systemd. 2. Check /etc/hostname 3. Check /etc/conf.d/hostname ##### EXPECTED RESULTS If target OS is Gentoo Linux with systemd then SystemdStrategy should be used, not OpenRCStrategy. Module should works as expected and should checks if systemd is active, like in module service.py ##### ACTUAL RESULTS Actually only /etc/conf.d/hostname changed. ",1,hostname not set correctly on gentoo linux with systemd issue type bug report component name hostname ansible version ansible config file configured module search path default w o overrides os environment host os n a target os gentoo linux summary the hostname task only updates etc conf d hostname no difference what init are using openrc or systemd this pull request can be relevant to this bug see also issue steps to reproduce set hostname using the hostname task on target gentoo linux with systemd check etc hostname check etc conf d hostname expected results if target os is gentoo linux with systemd then systemdstrategy should be used not openrcstrategy module should works as expected and should checks if systemd is active like in module service py actual results actually only etc conf d hostname changed ,1 867,4536112572.0,IssuesEvent,2016-09-08 19:23:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,get_url + file:/// protocol is broken,bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0 (devel de9e70e590) last updated 2016/04/21 10:44:10 (GMT -400) lib/ansible/modules/core: (detached HEAD 2256ae0793) last updated 2016/04/21 10:49:47 (GMT -400) lib/ansible/modules/extras: (detached HEAD 14c323cc8e) last updated 2016/04/21 10:49:47 (GMT -400) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION NA ##### OS / ENVIRONMENT ansible from checkout local connection mode centos7 VM ##### SUMMARY get_url doesn't handle the status code of None returned by urllib for the file:// protocol and fails unexpectedly. 
##### STEPS TO REPRODUCE ``` - hosts: localhost connection: local gather_facts: False tasks: - copy: dest=/tmp/afile.txt content=""foobar"" - get_url: url=""file:///tmp/afile.txt"" dest=/tmp/afilecopy.txt force=yes ``` ##### EXPECTED RESULTS No task failures. ##### ACTUAL RESULTS ``` $ ansible-playbook -v geturltest.yml No config file found; using defaults [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [copy] ******************************************************************** ok: [localhost] => {""changed"": false, ""checksum"": ""8843d7f92416211de9ebb963ff4ce28125932878"", ""dest"": ""/tmp/afile.txt"", ""gid"": 20, ""group"": ""staff"", ""mode"": ""0644"", ""owner"": ""jtanner"", ""path"": ""/tmp/afile.txt"", ""size"": 6, ""state"": ""file"", ""uid"": 501} TASK [get_url] ***************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""dest"": ""/tmp/afilecopy.txt"", ""failed"": true, ""msg"": ""Request failed"", ""response"": ""OK (6 bytes)"", ""state"": ""absent"", ""status_code"": null, ""url"": ""file:///tmp/afile.txt""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @geturltest.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",True,"get_url + file:/// protocol is broken - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0 (devel de9e70e590) last updated 2016/04/21 10:44:10 (GMT -400) lib/ansible/modules/core: (detached HEAD 2256ae0793) last updated 2016/04/21 10:49:47 (GMT -400) lib/ansible/modules/extras: (detached HEAD 14c323cc8e) last updated 2016/04/21 10:49:47 (GMT -400) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION NA ##### OS / ENVIRONMENT ansible from checkout local connection mode centos7 VM ##### SUMMARY get_url doesn't handle the status code of None returned by urllib for the file:// protocol and fails unexpectedly. ##### STEPS TO REPRODUCE ``` - hosts: localhost connection: local gather_facts: False tasks: - copy: dest=/tmp/afile.txt content=""foobar"" - get_url: url=""file:///tmp/afile.txt"" dest=/tmp/afilecopy.txt force=yes ``` ##### EXPECTED RESULTS No task failures. ##### ACTUAL RESULTS ``` $ ansible-playbook -v geturltest.yml No config file found; using defaults [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [copy] ******************************************************************** ok: [localhost] => {""changed"": false, ""checksum"": ""8843d7f92416211de9ebb963ff4ce28125932878"", ""dest"": ""/tmp/afile.txt"", ""gid"": 20, ""group"": ""staff"", ""mode"": ""0644"", ""owner"": ""jtanner"", ""path"": ""/tmp/afile.txt"", ""size"": 6, ""state"": ""file"", ""uid"": 501} TASK [get_url] ***************************************************************** fatal: [localhost]: FAILED! 
=> {""changed"": false, ""dest"": ""/tmp/afilecopy.txt"", ""failed"": true, ""msg"": ""Request failed"", ""response"": ""OK (6 bytes)"", ""state"": ""absent"", ""status_code"": null, ""url"": ""file:///tmp/afile.txt""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @geturltest.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` ",1,get url file protocol is broken issue type bug report component name get url module ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration na os environment ansible from checkout local connection mode vm summary get url doesn t handle the status code of none returned by urllib for the file protocol and fails unexpectedly steps to reproduce hosts localhost connection local gather facts false tasks copy dest tmp afile txt content foobar get url url file tmp afile txt dest tmp afilecopy txt force yes expected results no task failures actual results ansible playbook v geturltest yml no config file found using defaults host file not found etc ansible hosts provided hosts list is empty only localhost is available play task ok changed false checksum dest tmp afile txt gid group staff mode owner jtanner path tmp afile txt size state file uid task fatal failed changed false dest tmp afilecopy txt failed true msg request failed response ok bytes state absent status code null url file tmp afile txt no more hosts left to retry use limit geturltest retry play recap localhost ok changed unreachable failed ,1 1733,6574850183.0,IssuesEvent,2017-09-11 14:17:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Azure storage account facts module returns no facts at all,affects_2.1 azure bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `azure_rm_storageaccount_facts` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /usr/src/playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Ubuntu 14.04 LTS ##### SUMMARY The module returns no useful information and provides no value. I have to fall back to the `azure-cli` `node.js` module in order to retrieve the storage account keys and manually parse the text output. ##### STEPS TO REPRODUCE Just run this: ``` - name: Get azure facts azure_rm_storageaccount_facts: name: ""{{ account_name }}"" resource_group: ""{{ resource_group }}"" client_id: ""{{ client_id }}"" secret: ""{{ client_secret }}"" subscription_id: ""{{ subscription_id }}"" tenant: ""{{ tenant_id }}"" ``` ##### EXPECTED RESULTS Account access keys, blob/containers if any, etc. Basically, useful information about the storage account, which I could use. I'd expect the exact same capabilities as the official `azure-cli` by Microsoft, buy I'd settle for simple stuff. The fact that this doesn't return anything useful makes the `azure_rm_storageaccount` useless for automation, since you can create/destroy storage accounts but you can't get their access keys. 
##### ACTUAL RESULTS ``` ok: [localhost] => { ""out_account_facts"": { ""ansible_facts"": { ""azure_storageaccounts"": [ { ""location"": ""eastus"", ""tags"": {} } ] }, ""changed"": false } } ``` ",True,"Azure storage account facts module returns no facts at all - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `azure_rm_storageaccount_facts` ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /usr/src/playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT Ubuntu 14.04 LTS ##### SUMMARY The module returns no useful information and provides no value. I have to fall back to the `azure-cli` `node.js` module in order to retrieve the storage account keys and manually parse the text output. ##### STEPS TO REPRODUCE Just run this: ``` - name: Get azure facts azure_rm_storageaccount_facts: name: ""{{ account_name }}"" resource_group: ""{{ resource_group }}"" client_id: ""{{ client_id }}"" secret: ""{{ client_secret }}"" subscription_id: ""{{ subscription_id }}"" tenant: ""{{ tenant_id }}"" ``` ##### EXPECTED RESULTS Account access keys, blob/containers if any, etc. Basically, useful information about the storage account, which I could use. I'd expect the exact same capabilities as the official `azure-cli` by Microsoft, buy I'd settle for simple stuff. The fact that this doesn't return anything useful makes the `azure_rm_storageaccount` useless for automation, since you can create/destroy storage accounts but you can't get their access keys. ##### ACTUAL RESULTS ``` ok: [localhost] => { ""out_account_facts"": { ""ansible_facts"": { ""azure_storageaccounts"": [ { ""location"": ""eastus"", ""tags"": {} } ] }, ""changed"": false } } ``` ",1,azure storage account facts module returns no facts at all issue type bug report component name azure rm storageaccount facts ansible version ansible config file usr src playbooks ansible cfg configured module search path default w o overrides configuration default os environment ubuntu lts summary the module returns no useful information and provides no value i have to fall back to the azure cli node js module in order to retrieve the storage account keys and manually parse the text output steps to reproduce just run this name get azure facts azure rm storageaccount facts name account name resource group resource group client id client id secret client secret subscription id subscription id tenant tenant id expected results account access keys blob containers if any etc basically useful information about the storage account which i could use i d expect the exact same capabilities as the official azure cli by microsoft buy i d settle for simple stuff the fact that this doesn t return anything useful makes the azure rm storageaccount useless for automation since you can create destroy storage accounts but you can t get their access keys actual results ok out account facts ansible facts azure storageaccounts location eastus tags changed false ,1 1219,5199689645.0,IssuesEvent,2017-01-23 21:35:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,It would be nice for `copy` to optionally show a progress bar during upload,affects_2.0 feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: copy ##### Ansible Version: ``` ansible 2.0.1.0 config file = /home/.../ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: ``` $ cat ansible.cfg [defaults] inventory = hosts 
roles_path = roles/ ``` ##### Environment: N/A, not platform-specific ##### Summary: I have an ansible playbook which uses the `copy` module to upload a 3GB file to managed hosts. This takes a long time, and causes me to wonder whether it's still uploading or got stuck somewhere. It would be nice if the `copy` module had an option for showing a progress indicator while uploading. ##### Steps To Reproduce: How the new feature might be used: ``` - name: Upload Xcode DMG to the host copy: src={{ xcode_dmg }} dest=/Users/{{ ansible_ssh_user }} show_progress_bar=True when: xcode_dl_path.stat.exists == False ``` ##### Expected Results: It would be nice if the playbook output a progress indicator when the progress bar setting was enabled. ##### Actual Results: I read the docs and asked on IRC and daemoniz confirmed that there is not a built-in feature to do this. ",True,"It would be nice for `copy` to optionally show a progress bar during upload - ##### Issue Type: - Feature Idea ##### Plugin Name: copy ##### Ansible Version: ``` ansible 2.0.1.0 config file = /home/.../ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: ``` $ cat ansible.cfg [defaults] inventory = hosts roles_path = roles/ ``` ##### Environment: N/A, not platform-specific ##### Summary: I have an ansible playbook which uses the `copy` module to upload a 3GB file to managed hosts. This takes a long time, and causes me to wonder whether it's still uploading or got stuck somewhere. It would be nice if the `copy` module had an option for showing a progress indicator while uploading. ##### Steps To Reproduce: How the new feature might be used: ``` - name: Upload Xcode DMG to the host copy: src={{ xcode_dmg }} dest=/Users/{{ ansible_ssh_user }} show_progress_bar=True when: xcode_dl_path.stat.exists == False ``` ##### Expected Results: It would be nice if the playbook output a progress indicator when the progress bar setting was enabled. ##### Actual Results: I read the docs and asked on IRC and daemoniz confirmed that there is not a built-in feature to do this. 
",1,it would be nice for copy to optionally show a progress bar during upload issue type feature idea plugin name copy ansible version ansible config file home ansible cfg configured module search path default w o overrides ansible configuration cat ansible cfg inventory hosts roles path roles environment n a not platform specific summary i have an ansible playbook which uses the copy module to upload a file to managed hosts this takes a long time and causes me to wonder whether it s still uploading or got stuck somewhere it would be nice if the copy module had an option for showing a progress indicator while uploading steps to reproduce how the new feature might be used name upload xcode dmg to the host copy src xcode dmg dest users ansible ssh user show progress bar true when xcode dl path stat exists false expected results it would be nice if the playbook output a progress indicator when the progress bar setting was enabled actual results i read the docs and asked on irc and daemoniz confirmed that there is not a built in feature to do this ,1 1634,6572657916.0,IssuesEvent,2017-09-11 04:08:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Fetch Silently Fails To Fetch File,affects_2.0 bug_report P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME fetch module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from Amazon Linux 2016.03 release Target is Windows Server 2012 R2 (recent patches applied) ##### SUMMARY Fetching an 80mbyte file from Windows Server silently fails. Ansible does not report an error, but the file never arrives. (Fetching the same file from a remote Linux server works fine, btw) ##### STEPS TO REPRODUCE ``` - name: fetch deliverable fetch: src=""{{ topleveldir }}\ult_{{ hostvars['localhost']['version'] }}_{{ hostvars['localhost']['soft'] }}_{{ hostvars['localhost']['target'] }}.zip"" dest=/home/ec2-user/files flat=yes fail_on_missing=yes - name: Output the name of the archive debug: msg=""archive - ult_{{ hostvars['localhost']['version'] }}_{{ hostvars['localhost']['soft'] }}_{{ hostvars['localhost']['target'] }}.zip"" ``` ##### EXPECTED RESULTS I expect the file to be placed on the control machine. ##### ACTUAL RESULTS The file never arrives but the results don't indicate a problem (afaict). 
``` TASK [fetch deliverable] ******************************************************* task path: /build_scripts/build_noc_cust_win.yml:204 <52.36.54.12> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 52.36.54.12 <52.36.54.12> WINRM CONNECT: transport=ssl endpoint=https://52.36.54.12:5986/wsman <52.36.54.12> EXEC Set-StrictMode -Version Latest If (Test-Path -PathType Leaf ""C:\apps\ult_master_noc_win2012.zip"") { $sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider; $fp = [System.IO.File]::Open(""C:\apps\ult_master_noc_win2012.zip"", [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read); [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace(""-"", """").ToLower(); $fp.Dispose(); } ElseIf (Test-Path -PathType Container ""C:\apps\ult_master_noc_win2012.zip"") { Write-Host ""3""; } Else { Write-Host ""1""; } <52.36.54.12> WINRM OPEN SHELL: E5D85769-78F7-40DF-9A5F-B98304A49455 <52.36.54.12> WINRM EXEC u'PowerShell' [u'-NoProfile', u'-NonInteractive', u'-ExecutionPolicy', u'Unrestricted', u'-EncodedCommand', u'UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBJAGYAIAAoAFQAZQBzAHQALQBQAGEAdABoACAALQBQAGEAdABoAFQAeQBwAGUAIABMAGUAYQBmACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAF8AbgBvAGUAdABsAF8AdwBpAG4AMgAwADEAMgAuAHoAaQBwACIAKQAKAHsACgAkAHMAcAAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAALQBUAHkAcABlAE4AYQBtAGUAIABTAHkAcwB0AGUAbQAuAFMAZQBjAHUAcgBpAHQAeQAuAEMAcgB5AHAAdABvAGcAcgBhAHAAaAB5AC4AUwBIAEEAMQBDAHIAeQBwAHQAbwBTAGUAcgB2AGkAYwBlAFAAcgBvAHYAaQBkAGUAcgA7AAoAJABmAHAAIAA9ACAAWwBTAHkAcwB0AGUAbQAuAEkATwAuAEYAaQBsAGUAXQA6dABhAFwAdQBsAHQAcgBhAHcAcgBhAHAAXwBtAGEAcwB0AGUAcgBfAG4AbwBlAHQAbABfAHcAaQBuADIAMAAxADIALgB6AGkAcAAiACwAIABbAFMAeQBzAHQAZQBtAC4ASQBPAC4ARgBpAGwAZQBtAG8AZABlAF0AOgA6AE8AcABlAG4ALAAgAFsAUwB5AHMAdABlAG0ALgBJAE8ALgBGAGkAbABlAEEAYwBjAGUAcwBzAF0AOgA6AFIAZQBhAGQAKQA7AAoAWwBTAHkAcwB0AGUAbQAuAEIAaQB0AEMAbwBuAHYAZQByAHQAZQByAF0AOgA6AFQAbwBTAHQAcgBpAG4AZwAoACQAcwBwAC4AQwBvAG0AcAB1AHQAZQBIAGEAcwBoACgAJABmAHAAKQApAC4AUgBlAHAAbABhAGMAZQAoACIALQAiACwAIAAiACIAKQAuAFQAbwBMAG8AdwBlAHIAKAApADsACgAkAGYAcAAuAEQAaQBzAHAAbwBzAGUAKAApADsACgB9AAoARQBsAAAXwBtAGEAcwB0AGUAcgBfAG4AbwBlAHQAbABfAHcAaQBuADIAMAAxADIALgB6AGkAcAAiACkACgB7AAoAVwByAGkAdABlAC0ASABvAHMAdAAgACIAMwAiADsACgB9AAoARQBsAHMAZQAKAHsACgBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgAxACIAOwAKAH0A'] <52.36.54.12> WINRM RESULT u'' <52.36.54.12> WINRM STDOUT 22d5bc624a1e6d0ce1162a60ab04f49c70ac3dc8 <52.36.54.12> WINRM STDERR <52.36.54.12> FETCH ""C:\apps\ult_master_noc_win2012.zip"" TO ""/home/ec2-user/files"" <52.36.54.12> WINRM FETCH ""C:\apps\ult_master_noc_win2012.zip"" to ""/home/ec2-user/files"" (offset=0) <52.36.54.12> WINRM EXEC 'PowerShell' ['-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted', '-EncodedCommand', 
'UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBJAGYAIAAoAFQAZQBzAHQALQBQAGEAdABoACAALQBQAGEAdABoAFQAeQBwAGUAIABMAGUAYQBmACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAFBuADIAMAAxADIALgB6AGkAcAAiACkAOwAKACQAcwB0AHIAZQBhAG0ALgBTAGUAZQBrACgAMAAsACAAWwBTAHkAcwB0AGUAbQAuAEkATwAuAFMAZQBlAGsATwByAGkAZwBpAG4AXQA6ADoAQgBlAGcAaQBuACkAIAB8ACAATwB1AHQALQBOAHUAbABsADsACgAkAGIAdQBmAGYAZQByACAAPQAgAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABCAHkAdABlAFsAXQAgADUAMgA0ADIAOAA4ADsACgAkAGIAeQB0AGUAcwBSAGUAYQBkACAAPQAgACQAcwB0AHIAZQBhAG0ALgBSAGUAYQBkACgAJABiAHUAZgBmAGUAcgAsACAAMAAsACAANQAyADQAMgA4ADgAKQA7AAoAJABiAHkAdABlAHMAIAA9ACAAJABiAHUAZgBmAGUAcgBbADAALgAuACgAJABiAHkAdABlAHMAUgBlAGEAZAAtADEAKQBdADsACgBbAFMAeQBzAHQAZQBtAC4AQwBvAG4AdgBlAHIAdABdADoAOgBUAG8AQgBhAHMAZQA2ADQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwApADsACgAkAHMAdAByAGUAYQBtAC4AMgAuAHoAaQBwACIAKQAKAHsACgBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgBbAEQASQBSAF0AIgA7AAoAfQAKAEUAbABzAGUACgB7AAoAVwByAGkAdABlAC0ARQByAHIAbwByACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAF8AbgBvAGUAdABsAF8AdwBpAG4AMgAwADEAMgAuAHoAaQBwACAAZABvAGUAcwAgAG4AbwB0ACAAZQB4AGkAcwB0ACIAOwAKAEUAeABpAHQAIAAxADsACgB9AA=='] <52.36.54.12> WINRM RESULT u'' <52.36.54.12> WINRM STDOUT [Long encoded part deleted] <52.36.54.12> WINRM STDERR <52.36.54.12> WINRM CLOSE SHELL: E5D85769-78F7-40DF-9A5F-B98304A49455 changed: [52.36.10.20] => {""changed"": true, ""checksum"": null, ""dest"": ""/home/ec2-user/files"", ""invocation"": {""module_args"": {""dest"": ""/home/ec2-user/files"", ""fail_on_missing"": ""yes"", ""flat"": ""yes"", ""src"": ""C:\\apps\\ult_master_noc_win2012.zip""}, ""module_name"": ""fetch""}, ""md5sum"": null, ""remote_checksum"": ""22d5bc624a1e6d0ce1162a60ab04f49c70ac3dc8"", ""remote_md5sum"": null} TASK [Output the name of the archive] ****************************************** task path: /build_scripts/build_noetl_cust_win.yml:208 ok: [52.36.10.20] => { ""msg"": ""archive - ult_master_noc_win2012.zip"" } ``` ",True,"Fetch Silently Fails To Fetch File - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME fetch module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from Amazon Linux 2016.03 release Target is Windows Server 2012 R2 (recent patches applied) ##### SUMMARY Fetching an 80mbyte file from Windows Server silently fails. Ansible does not report an error, but the file never arrives. (Fetching the same file from a remote Linux server works fine, btw) ##### STEPS TO REPRODUCE ``` - name: fetch deliverable fetch: src=""{{ topleveldir }}\ult_{{ hostvars['localhost']['version'] }}_{{ hostvars['localhost']['soft'] }}_{{ hostvars['localhost']['target'] }}.zip"" dest=/home/ec2-user/files flat=yes fail_on_missing=yes - name: Output the name of the archive debug: msg=""archive - ult_{{ hostvars['localhost']['version'] }}_{{ hostvars['localhost']['soft'] }}_{{ hostvars['localhost']['target'] }}.zip"" ``` ##### EXPECTED RESULTS I expect the file to be placed on the control machine. ##### ACTUAL RESULTS The file never arrives but the results don't indicate a problem (afaict). 
``` TASK [fetch deliverable] ******************************************************* task path: /build_scripts/build_noc_cust_win.yml:204 <52.36.54.12> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 52.36.54.12 <52.36.54.12> WINRM CONNECT: transport=ssl endpoint=https://52.36.54.12:5986/wsman <52.36.54.12> EXEC Set-StrictMode -Version Latest If (Test-Path -PathType Leaf ""C:\apps\ult_master_noc_win2012.zip"") { $sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider; $fp = [System.IO.File]::Open(""C:\apps\ult_master_noc_win2012.zip"", [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read); [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace(""-"", """").ToLower(); $fp.Dispose(); } ElseIf (Test-Path -PathType Container ""C:\apps\ult_master_noc_win2012.zip"") { Write-Host ""3""; } Else { Write-Host ""1""; } <52.36.54.12> WINRM OPEN SHELL: E5D85769-78F7-40DF-9A5F-B98304A49455 <52.36.54.12> WINRM EXEC u'PowerShell' [u'-NoProfile', u'-NonInteractive', u'-ExecutionPolicy', u'Unrestricted', u'-EncodedCommand', u'UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBJAGYAIAAoAFQAZQBzAHQALQBQAGEAdABoACAALQBQAGEAdABoAFQAeQBwAGUAIABMAGUAYQBmACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAF8AbgBvAGUAdABsAF8AdwBpAG4AMgAwADEAMgAuAHoAaQBwACIAKQAKAHsACgAkAHMAcAAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAALQBUAHkAcABlAE4AYQBtAGUAIABTAHkAcwB0AGUAbQAuAFMAZQBjAHUAcgBpAHQAeQAuAEMAcgB5AHAAdABvAGcAcgBhAHAAaAB5AC4AUwBIAEEAMQBDAHIAeQBwAHQAbwBTAGUAcgB2AGkAYwBlAFAAcgBvAHYAaQBkAGUAcgA7AAoAJABmAHAAIAA9ACAAWwBTAHkAcwB0AGUAbQAuAEkATwAuAEYAaQBsAGUAXQA6dABhAFwAdQBsAHQAcgBhAHcAcgBhAHAAXwBtAGEAcwB0AGUAcgBfAG4AbwBlAHQAbABfAHcAaQBuADIAMAAxADIALgB6AGkAcAAiACwAIABbAFMAeQBzAHQAZQBtAC4ASQBPAC4ARgBpAGwAZQBtAG8AZABlAF0AOgA6AE8AcABlAG4ALAAgAFsAUwB5AHMAdABlAG0ALgBJAE8ALgBGAGkAbABlAEEAYwBjAGUAcwBzAF0AOgA6AFIAZQBhAGQAKQA7AAoAWwBTAHkAcwB0AGUAbQAuAEIAaQB0AEMAbwBuAHYAZQByAHQAZQByAF0AOgA6AFQAbwBTAHQAcgBpAG4AZwAoACQAcwBwAC4AQwBvAG0AcAB1AHQAZQBIAGEAcwBoACgAJABmAHAAKQApAC4AUgBlAHAAbABhAGMAZQAoACIALQAiACwAIAAiACIAKQAuAFQAbwBMAG8AdwBlAHIAKAApADsACgAkAGYAcAAuAEQAaQBzAHAAbwBzAGUAKAApADsACgB9AAoARQBsAAAXwBtAGEAcwB0AGUAcgBfAG4AbwBlAHQAbABfAHcAaQBuADIAMAAxADIALgB6AGkAcAAiACkACgB7AAoAVwByAGkAdABlAC0ASABvAHMAdAAgACIAMwAiADsACgB9AAoARQBsAHMAZQAKAHsACgBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgAxACIAOwAKAH0A'] <52.36.54.12> WINRM RESULT u'' <52.36.54.12> WINRM STDOUT 22d5bc624a1e6d0ce1162a60ab04f49c70ac3dc8 <52.36.54.12> WINRM STDERR <52.36.54.12> FETCH ""C:\apps\ult_master_noc_win2012.zip"" TO ""/home/ec2-user/files"" <52.36.54.12> WINRM FETCH ""C:\apps\ult_master_noc_win2012.zip"" to ""/home/ec2-user/files"" (offset=0) <52.36.54.12> WINRM EXEC 'PowerShell' ['-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted', '-EncodedCommand', 
'UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBJAGYAIAAoAFQAZQBzAHQALQBQAGEAdABoACAALQBQAGEAdABoAFQAeQBwAGUAIABMAGUAYQBmACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAFBuADIAMAAxADIALgB6AGkAcAAiACkAOwAKACQAcwB0AHIAZQBhAG0ALgBTAGUAZQBrACgAMAAsACAAWwBTAHkAcwB0AGUAbQAuAEkATwAuAFMAZQBlAGsATwByAGkAZwBpAG4AXQA6ADoAQgBlAGcAaQBuACkAIAB8ACAATwB1AHQALQBOAHUAbABsADsACgAkAGIAdQBmAGYAZQByACAAPQAgAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABCAHkAdABlAFsAXQAgADUAMgA0ADIAOAA4ADsACgAkAGIAeQB0AGUAcwBSAGUAYQBkACAAPQAgACQAcwB0AHIAZQBhAG0ALgBSAGUAYQBkACgAJABiAHUAZgBmAGUAcgAsACAAMAAsACAANQAyADQAMgA4ADgAKQA7AAoAJABiAHkAdABlAHMAIAA9ACAAJABiAHUAZgBmAGUAcgBbADAALgAuACgAJABiAHkAdABlAHMAUgBlAGEAZAAtADEAKQBdADsACgBbAFMAeQBzAHQAZQBtAC4AQwBvAG4AdgBlAHIAdABdADoAOgBUAG8AQgBhAHMAZQA2ADQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwApADsACgAkAHMAdAByAGUAYQBtAC4AMgAuAHoAaQBwACIAKQAKAHsACgBXAHIAaQB0AGUALQBIAG8AcwB0ACAAIgBbAEQASQBSAF0AIgA7AAoAfQAKAEUAbABzAGUACgB7AAoAVwByAGkAdABlAC0ARQByAHIAbwByACAAIgBDADoAXABjAGEAcABzAGUAbgB0AGEAXAB1AGwAdAByAGEAdwByAGEAcABfAG0AYQBzAHQAZQByAF8AbgBvAGUAdABsAF8AdwBpAG4AMgAwADEAMgAuAHoAaQBwACAAZABvAGUAcwAgAG4AbwB0ACAAZQB4AGkAcwB0ACIAOwAKAEUAeABpAHQAIAAxADsACgB9AA=='] <52.36.54.12> WINRM RESULT u'' <52.36.54.12> WINRM STDOUT [Long encoded part deleted] <52.36.54.12> WINRM STDERR <52.36.54.12> WINRM CLOSE SHELL: E5D85769-78F7-40DF-9A5F-B98304A49455 changed: [52.36.10.20] => {""changed"": true, ""checksum"": null, ""dest"": ""/home/ec2-user/files"", ""invocation"": {""module_args"": {""dest"": ""/home/ec2-user/files"", ""fail_on_missing"": ""yes"", ""flat"": ""yes"", ""src"": ""C:\\apps\\ult_master_noc_win2012.zip""}, ""module_name"": ""fetch""}, ""md5sum"": null, ""remote_checksum"": ""22d5bc624a1e6d0ce1162a60ab04f49c70ac3dc8"", ""remote_md5sum"": null} TASK [Output the name of the archive] ****************************************** task path: /build_scripts/build_noetl_cust_win.yml:208 ok: [52.36.10.20] => { ""msg"": ""archive - ult_master_noc_win2012.zip"" } ``` ",1,fetch silently fails to fetch file issue type bug report component name fetch module ansible version ansible config file configured module search path default w o overrides configuration os environment running from amazon linux release target is windows server recent patches applied summary fetching an file from windows server silently fails ansible does not report an error but the file never arrives fetching the same file from a remote linux server works fine btw steps to reproduce name fetch deliverable fetch src topleveldir ult hostvars hostvars hostvars zip dest home user files flat yes fail on missing yes name output the name of the archive debug msg archive ult hostvars hostvars hostvars zip expected results i expect the file to be placed on the control machine actual results the file never arrives but the results don t indicate a problem afaict task task path build scripts build noc cust win yml establish winrm connection for user administrator on port to winrm connect transport ssl endpoint exec set strictmode version latest if test path pathtype leaf c apps ult master noc zip sp new object typename system security cryptography fp open c apps ult master noc zip open read tostring sp computehash fp replace tolower fp dispose elseif test path pathtype container c apps ult master noc zip write host else write host winrm open shell winrm exec u powershell winrm result u winrm stdout winrm stderr fetch c apps ult master noc zip to home user files winrm fetch c apps ult master noc zip to 
home user files offset winrm exec powershell winrm result u winrm stdout winrm stderr winrm close shell changed changed true checksum null dest home user files invocation module args dest home user files fail on missing yes flat yes src c apps ult master noc zip module name fetch null remote checksum remote null task task path build scripts build noetl cust win yml ok msg archive ult master noc zip ,1 1850,6577390846.0,IssuesEvent,2017-09-12 00:35:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2_vpc_net always returns ""changed"" when state=present",affects_2.2 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc_net ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 56ba10365c) last updated 2016/05/05 17:12:03 (GMT +550) ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_vpc_net always sets ""changed"" when state=present, because the code calls update_vpc_tags() and sets changed if `tags is not None or name is not None`, regardless of whether the vpc exists or not. So if you specify name and cidr_block, the tags are always updated. cc: @defionscode ##### STEPS TO REPRODUCE ``` - ec2_vpc_net: state: present name: ExampleVPC cidr_block: 192.0.2.0/24 register: vpc - fail: msg=""I created a VPC"" when: vpc.changed ``` ",True,"ec2_vpc_net always returns ""changed"" when state=present - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc_net ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 56ba10365c) last updated 2016/05/05 17:12:03 (GMT +550) ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_vpc_net always sets ""changed"" when state=present, because the code calls update_vpc_tags() and sets changed if `tags is not None or name is not None`, regardless of whether the vpc exists or not. So if you specify name and cidr_block, the tags are always updated. cc: @defionscode ##### STEPS TO REPRODUCE ``` - ec2_vpc_net: state: present name: ExampleVPC cidr_block: 192.0.2.0/24 register: vpc - fail: msg=""I created a VPC"" when: vpc.changed ``` ",1, vpc net always returns changed when state present issue type bug report component name vpc net ansible version ansible devel last updated gmt configuration default os environment n a summary vpc net always sets changed when state present because the code calls update vpc tags and sets changed if tags is not none or name is not none regardless of whether the vpc exists or not so if you specify name and cidr block the tags are always updated cc defionscode steps to reproduce vpc net state present name examplevpc cidr block register vpc fail msg i created a vpc when vpc changed ,1 1115,4989020312.0,IssuesEvent,2016-12-08 10:23:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt_repository: does not understand arch= option,affects_2.1 feature_idea waiting_on_maintainer,"I'm my systems I have a lot of repositories with an arch-qualifier. e.g.: ``` deb [arch=ppc64el] http://ftp.debian.org/debian/ sid main deb [arch=amd64,i386] http://dl.google.com/linux/talkplugin/deb/ stable main ``` apt_repository does not understand them and tries to re-add that repos, actually duplicating them. Moreover I'd like to be able to specify these option in ansible. Currently available options are (from sources.list(5): ``` · arch=arch1,arch2,... can be used to specify for which architectures information should be downloaded. 
If this option is not set all architectures defined by the APT::Architectures option will be downloaded. · arch+=arch1,arch2,... and arch-=arch1,arch2,... which can be used to add/remove architectures from the set which will be downloaded. · trusted=yes can be set to indicate that packages from this source are always authenticated even if the Release file is not signed or the signature can't be checked. This disables parts of apt-secure(8) and should therefore only be used in a local and trusted context. trusted=no is the opposite which handles even correctly authenticated sources as not authenticated. ``` these options are surrounded by square brackets, and can or cannot have leading and trailing spaces between the options and the brackets. ",True,"apt_repository: does not understand arch= option - I'm my systems I have a lot of repositories with an arch-qualifier. e.g.: ``` deb [arch=ppc64el] http://ftp.debian.org/debian/ sid main deb [arch=amd64,i386] http://dl.google.com/linux/talkplugin/deb/ stable main ``` apt_repository does not understand them and tries to re-add that repos, actually duplicating them. Moreover I'd like to be able to specify these option in ansible. Currently available options are (from sources.list(5): ``` · arch=arch1,arch2,... can be used to specify for which architectures information should be downloaded. If this option is not set all architectures defined by the APT::Architectures option will be downloaded. · arch+=arch1,arch2,... and arch-=arch1,arch2,... which can be used to add/remove architectures from the set which will be downloaded. · trusted=yes can be set to indicate that packages from this source are always authenticated even if the Release file is not signed or the signature can't be checked. This disables parts of apt-secure(8) and should therefore only be used in a local and trusted context. trusted=no is the opposite which handles even correctly authenticated sources as not authenticated. ``` these options are surrounded by square brackets, and can or cannot have leading and trailing spaces between the options and the brackets. 
",1,apt repository does not understand arch option i m my systems i have a lot of repositories with an arch qualifier e g deb sid main deb stable main apt repository does not understand them and tries to re add that repos actually duplicating them moreover i d like to be able to specify these option in ansible currently available options are from sources list · arch can be used to specify for which architectures information should be downloaded if this option is not set all architectures defined by the apt architectures option will be downloaded · arch and arch which can be used to add remove architectures from the set which will be downloaded · trusted yes can be set to indicate that packages from this source are always authenticated even if the release file is not signed or the signature can t be checked this disables parts of apt secure and should therefore only be used in a local and trusted context trusted no is the opposite which handles even correctly authenticated sources as not authenticated these options are surrounded by square brackets and can or cannot have leading and trailing spaces between the options and the brackets ,1 1112,4988882622.0,IssuesEvent,2016-12-08 09:57:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,EC2 module parameter `id` is undocumented.,affects_2.3 aws cloud docs_report waiting_on_maintainer,"The `id` parameter was added at https://github.com/ansible/ansible/pull/2421. For some reason the parameter's documentation was removed later at https://github.com/ansible/ansible-modules-core/commit/c6b0d469acbb1a1b0508bacaedc5456eb5e9be83#diff-9667dfcde0b7854855c94acb534b156aL31 Can the documentation be restored? I haven't seen notices about deprecation and the functionality is there. ",True,"EC2 module parameter `id` is undocumented. - The `id` parameter was added at https://github.com/ansible/ansible/pull/2421. For some reason the parameter's documentation was removed later at https://github.com/ansible/ansible-modules-core/commit/c6b0d469acbb1a1b0508bacaedc5456eb5e9be83#diff-9667dfcde0b7854855c94acb534b156aL31 Can the documentation be restored? I haven't seen notices about deprecation and the functionality is there. 
",1, module parameter id is undocumented the id parameter was added at for some reason the parameter s documentation was removed later at can the documentation be restored i haven t seen notices about deprecation and the functionality is there ,1 1658,6574047651.0,IssuesEvent,2017-09-11 11:14:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Elasticache: Use of cache security groups is not permitted in this API version for your account,affects_2.2 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME elasticache ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile = hosts ##### OS / ENVIRONMENT macOS Sierra ##### SUMMARY I cannot create a cluster elasticache like the example http://docs.ansible.com/ansible/elasticache_module.html ##### STEPS TO REPRODUCE ansible-playbook example.yml ``` - elasticache: name: ""test-please-delete"" state: present engine: memcached cache_engine_version: 1.4.14 node_type: cache.m1.small num_nodes: 1 cache_port: 11211 cache_security_groups: - default region: us-east-1 zone: us-east-1d ``` ##### EXPECTED RESULTS cluster created ##### ACTUAL RESULTS Use of cache security groups is not permitted in this API version for your account ``` ``` ",True,"Elasticache: Use of cache security groups is not permitted in this API version for your account - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME elasticache ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile = hosts ##### OS / ENVIRONMENT macOS Sierra ##### SUMMARY I cannot create a cluster elasticache like the example http://docs.ansible.com/ansible/elasticache_module.html ##### STEPS TO REPRODUCE ansible-playbook example.yml ``` - elasticache: name: ""test-please-delete"" state: present engine: memcached cache_engine_version: 1.4.14 node_type: cache.m1.small num_nodes: 1 cache_port: 11211 cache_security_groups: - default region: us-east-1 zone: us-east-1d ``` ##### EXPECTED RESULTS cluster created ##### ACTUAL RESULTS Use of cache security groups is not permitted in this API version for your account ``` ``` ",1,elasticache use of cache security groups is not permitted in this api version for your account issue type bug report component name elasticache ansible version ansible config file configured module search path default w o overrides configuration hostfile hosts mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos sierra summary i cannot create a cluster elasticache like the example steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook example yml elasticache name test please delete state present engine memcached cache engine version node type cache small num nodes cache port cache security groups default region us east zone us east expected results cluster created actual results use of cache security groups is not permitted in this api version for your account ,1 1672,6574093878.0,IssuesEvent,2017-09-11 11:27:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container: unable to deal 
with image IDs,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to create a container for an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_container -a 'name=foo image=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 command=true' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_container -a 'name=foo image=alpine command=true' localhost`. ``` localhost | SUCCESS => { ""ansible_facts"": {}, ""changed"": true } ``` ##### ACTUAL RESULTS Instead Ansible tries to pull the image by its ID and naturally fails at that. ``` localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Error pulling sha256 - code: None message: Error: image library/sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 not found"" } ``` ",True,"docker_container: unable to deal with image IDs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to create a container for an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_container -a 'name=foo image=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 command=true' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_container -a 'name=foo image=alpine command=true' localhost`. ``` localhost | SUCCESS => { ""ansible_facts"": {}, ""changed"": true } ``` ##### ACTUAL RESULTS Instead Ansible tries to pull the image by its ID and naturally fails at that. ``` localhost | FAILED! 
=> { ""changed"": false, ""failed"": true, ""msg"": ""Error pulling sha256 - code: None message: Error: image library/sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 not found"" } ``` ",1,docker container unable to deal with image ids issue type bug report component name docker container ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing images by id ansible should do the same otherwise it s impossible to create a container for an unnamed image steps to reproduce sh docker pull alpine docker inspect format id alpine ansible m docker container a name foo image command true localhost expected results the output should be the same as from ansible m docker container a name foo image alpine command true localhost localhost success ansible facts changed true actual results instead ansible tries to pull the image by its id and naturally fails at that localhost failed changed false failed true msg error pulling code none message error image library not found ,1 1862,6577413684.0,IssuesEvent,2017-09-12 00:44:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"""service: enable=yes/no"" with spanish CentOS6 servers is broken",affects_2.0 bug_report docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME service core module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 (but I wouldn't discard other versions) ``` ##### OS / ENVIRONMENT CentOS 6.7 (at least) ##### SUMMARY On CentOS6 at least, a spanish translator did a bad translating in some error messages returned to the command line by the chkconfig tool which break the ""service enable=yes / no"" option in Ansible. It 's not a direct Ansible issue but Ansible is affected. ##### STEPS TO REPRODUCE ``` - name: Start myservice and enable it service: name=myservice state=started enabled=yes ``` ##### EXPECTED RESULTS ``` TASK [role : Start myservice and enable it ] *********************************************** ok: [myservice] ``` ##### ACTUAL RESULTS ``` TASK [role : Start myservice and enable it ] *********************************************** fatal: [myservice]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""service myservice does not support chkconfig""} ``` ##### WHY Line https://github.com/ansible/ansible-modules-core/blob/devel/system/service.py#L702 waits a concrete answer to do the regexp but, in Spanish, that line is bad translated changing the ""--add"" statement to ""--añada"" (doesn't make sense), so you see something like: ""El servicio myservice soporta chkconfig, pero no está registrado (ejecute \n'chkconfig -- añada myservice')\n"" the sentence ""chkconfig --add"" doesn't appear there and that breaks the ""if"". So the ""enable"" feature is broken. ##### WORKAROUND `export LANG=""en""` in the terminal, before starting to work with any sofware which uses Ansible inside. 
",True,"""service: enable=yes/no"" with spanish CentOS6 servers is broken - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME service core module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 (but I wouldn't discard other versions) ``` ##### OS / ENVIRONMENT CentOS 6.7 (at least) ##### SUMMARY On CentOS6 at least, a spanish translator did a bad translating in some error messages returned to the command line by the chkconfig tool which break the ""service enable=yes / no"" option in Ansible. It 's not a direct Ansible issue but Ansible is affected. ##### STEPS TO REPRODUCE ``` - name: Start myservice and enable it service: name=myservice state=started enabled=yes ``` ##### EXPECTED RESULTS ``` TASK [role : Start myservice and enable it ] *********************************************** ok: [myservice] ``` ##### ACTUAL RESULTS ``` TASK [role : Start myservice and enable it ] *********************************************** fatal: [myservice]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""service myservice does not support chkconfig""} ``` ##### WHY Line https://github.com/ansible/ansible-modules-core/blob/devel/system/service.py#L702 waits a concrete answer to do the regexp but, in Spanish, that line is bad translated changing the ""--add"" statement to ""--añada"" (doesn't make sense), so you see something like: ""El servicio myservice soporta chkconfig, pero no está registrado (ejecute \n'chkconfig -- añada myservice')\n"" the sentence ""chkconfig --add"" doesn't appear there and that breaks the ""if"". So the ""enable"" feature is broken. ##### WORKAROUND `export LANG=""en""` in the terminal, before starting to work with any sofware which uses Ansible inside. ",1, service enable yes no with spanish servers is broken issue type documentation report component name service core module ansible version ansible but i wouldn t discard other versions os environment centos at least summary on at least a spanish translator did a bad translating in some error messages returned to the command line by the chkconfig tool which break the service enable yes no option in ansible it s not a direct ansible issue but ansible is affected steps to reproduce name start myservice and enable it service name myservice state started enabled yes expected results task ok actual results task fatal failed changed false failed true msg service myservice does not support chkconfig why line waits a concrete answer to do the regexp but in spanish that line is bad translated changing the add statement to añada doesn t make sense so you see something like el servicio myservice soporta chkconfig pero no está registrado ejecute n chkconfig añada myservice n the sentence chkconfig add doesn t appear there and that breaks the if so the enable feature is broken workaround export lang en in the terminal before starting to work with any sofware which uses ansible inside ,1 1727,6574810678.0,IssuesEvent,2017-09-11 14:09:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Unexpected token \r\n when running script module on windows target with environment var set on playbook,affects_2.3 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME commands/script.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel fda933723c) last updated 2016/10/25 04:47:56 (GMT -200) lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/25 04:48:03 (GMT -200) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 
04:48:03 (GMT -200) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Only windows connection settings: ansible_port= 5986 ansible_connection= winrm ansible_winrm_transport= ntlm ansible_winrm_server_cert_validation= ignore ansible_user= Administrator ansible_password= ##### OS / ENVIRONMENT Running from checkout on CentOS Linux release 7.2.1511 (Core), targeting a fully updated windows Standard Server 2016 host (Full Desktop Experience Installed) ##### SUMMARY Running a powershell script using script modules throws me an error about ""Unexpected token \r\n'"" when I'm also setting environment variables on the playbook. Ansible 2.2-stable also throws the error. On Ansible 2.1 it works fine. Or at least it runs. Apparently I am not being able to actually use the envvar inside the script, but that may be something wrong I am doing. (in my particular case I am using the playbook envvar for some local actions run after that, not for using inside the script. ##### STEPS TO REPRODUCE Stripped down playbook: ``` --- - hosts: windows gather_facts: false environment: TESTENV: blah tasks: - name: test script script: files/test.ps1 creates=""c:\\test.txt"" ``` ##### EXPECTED RESULTS Script run sucessfully generating a file at c:\test.txt. ##### ACTUAL RESULTS ``` [root@ansiblelab-u01 ansible]# ansible-playbook -i inv test.yml -l vmwarelab -vvvv [4/1867] No config file found; using defaults Loading callback plugin default of type stdout, v2.0 from /root/src/ansible/lib/ansible/plugins/callback/__init__.py PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [windows] ***************************************************************** TASK [test script] ************************************************************* task path: /root/ansible/test.yml:7 ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO my.ip.address EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1477381702.07-161232195129995"").FullName | Write-Host -Separator ''; EXEC Set-StrictMode -Version Latest If (Test-Path ""c:\test.txt"") { $res = 0; } Else { $res = 1; } Write-Host ""$res""; Exit $res; PUT ""/root/ansible/files/test.ps1"" TO ""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1"" EXEC $env:TESTENV='blah' 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1' EXEC Set-StrictMode -Version Latest Remove-Item ""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995"" -Force -Recurse; fatal: [my.ip.address]: FAILED! => { ""changed"": true, ""failed"": true, ""invocation"": { ""module_args"": { ""_raw_params"": ""files/test.ps1"", ""creates"": ""c:\\test.txt"" }, ""module_name"": ""script"" }, ""rc"": 1, ""stderr"": ""At line:1 char:21\r\n+ ... 
TENV='blah' 'C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-14 ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~\r\nUnexpected token ''C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-1477381702.07-161232195129995\\test.ps1'' in \r\nexpression or statement.\r\n+ CategoryInfo : ParserErro r: (:) [], ParentContainsErrorRecordException\r\n+ FullyQualifiedErrorId : UnexpectedToken\r\n"", ""stdout"": """", ""stdout_lines"": [] } to retry, use: --limit @/root/ansible/test.retry PLAY RECAP ********************************************************************* my.ip.address : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"Unexpected token \r\n when running script module on windows target with environment var set on playbook - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME commands/script.py ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel fda933723c) last updated 2016/10/25 04:47:56 (GMT -200) lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/25 04:48:03 (GMT -200) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 04:48:03 (GMT -200) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Only windows connection settings: ansible_port= 5986 ansible_connection= winrm ansible_winrm_transport= ntlm ansible_winrm_server_cert_validation= ignore ansible_user= Administrator ansible_password= ##### OS / ENVIRONMENT Running from checkout on CentOS Linux release 7.2.1511 (Core), targeting a fully updated windows Standard Server 2016 host (Full Desktop Experience Installed) ##### SUMMARY Running a powershell script using script modules throws me an error about ""Unexpected token \r\n'"" when I'm also setting environment variables on the playbook. Ansible 2.2-stable also throws the error. On Ansible 2.1 it works fine. Or at least it runs. Apparently I am not being able to actually use the envvar inside the script, but that may be something wrong I am doing. (in my particular case I am using the playbook envvar for some local actions run after that, not for using inside the script. ##### STEPS TO REPRODUCE Stripped down playbook: ``` --- - hosts: windows gather_facts: false environment: TESTENV: blah tasks: - name: test script script: files/test.ps1 creates=""c:\\test.txt"" ``` ##### EXPECTED RESULTS Script run sucessfully generating a file at c:\test.txt. 
##### ACTUAL RESULTS ``` [root@ansiblelab-u01 ansible]# ansible-playbook -i inv test.yml -l vmwarelab -vvvv [4/1867] No config file found; using defaults Loading callback plugin default of type stdout, v2.0 from /root/src/ansible/lib/ansible/plugins/callback/__init__.py PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [windows] ***************************************************************** TASK [test script] ************************************************************* task path: /root/ansible/test.yml:7 ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO my.ip.address EXEC Set-StrictMode -Version Latest (New-Item -Type Directory -Path $env:temp -Name ""ansible-tmp-1477381702.07-161232195129995"").FullName | Write-Host -Separator ''; EXEC Set-StrictMode -Version Latest If (Test-Path ""c:\test.txt"") { $res = 0; } Else { $res = 1; } Write-Host ""$res""; Exit $res; PUT ""/root/ansible/files/test.ps1"" TO ""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1"" EXEC $env:TESTENV='blah' 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1' EXEC Set-StrictMode -Version Latest Remove-Item ""C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995"" -Force -Recurse; fatal: [my.ip.address]: FAILED! => { ""changed"": true, ""failed"": true, ""invocation"": { ""module_args"": { ""_raw_params"": ""files/test.ps1"", ""creates"": ""c:\\test.txt"" }, ""module_name"": ""script"" }, ""rc"": 1, ""stderr"": ""At line:1 char:21\r\n+ ... TENV='blah' 'C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-14 ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~\r\nUnexpected token ''C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-1477381702.07-161232195129995\\test.ps1'' in \r\nexpression or statement.\r\n+ CategoryInfo : ParserErro r: (:) [], ParentContainsErrorRecordException\r\n+ FullyQualifiedErrorId : UnexpectedToken\r\n"", ""stdout"": """", ""stdout_lines"": [] } to retry, use: --limit @/root/ansible/test.retry PLAY RECAP ********************************************************************* my.ip.address : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,unexpected token r n when running script module on windows target with environment var set on playbook issue type bug report component name commands script py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables only windows connection settings ansible port ansible connection winrm ansible winrm transport ntlm ansible winrm server cert validation ignore ansible user administrator ansible password os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from checkout on centos linux release core targeting a fully updated windows standard server host full desktop experience installed summary running a powershell script using script modules throws me an error about unexpected token r n when i m also setting environment variables on the playbook ansible stable also throws the error on ansible it works fine or at least it runs apparently i am not being able to actually 
use the envvar inside the script but that may be something wrong i am doing in my particular case i am using the playbook envvar for some local actions run after that not for using inside the script steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used stripped down playbook hosts windows gather facts false environment testenv blah tasks name test script script files test creates c test txt expected results script run sucessfully generating a file at c test txt actual results ansible playbook i inv test yml l vmwarelab vvvv no config file found using defaults loading callback plugin default of type stdout from root src ansible lib ansible plugins callback init py playbook test yml plays in test yml play task task path root ansible test yml establish winrm connection for user administrator on port to my ip address exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator exec set strictmode version latest if test path c test txt res else res write host res exit res put root ansible files test to c users administrator appdata local temp ansible tmp test exec env testenv blah c users administrator appdata local temp ansible tmp test exec set strictmode version latest remove item c users administrator appdata local temp ansible tmp force recurse fatal failed changed true failed true invocation module args raw params files test creates c test txt module name script rc stderr at line char r n tenv blah c users administrator appdata local temp ansible tmp r n r nunexpected token c users administrator appdata local temp ansible tmp test in r nexpression or statement r n categoryinfo parsererro r parentcontainserrorrecordexception r n fullyqualifiederrorid unexpectedtoken r n stdout stdout lines to retry use limit root ansible test retry play recap my ip address ok changed unreachable failed ,1 1060,4876982824.0,IssuesEvent,2016-11-16 14:30:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,reopened,ios_facts: `dir all-filesystems | include Directory`not supported on all devices,affects_2.2 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin"" WS-C3750-24PS-S ##### SUMMARY ogenstad: > `dir all-filesystems | include Directory` this is not a valid command on all ios devices. > If it’s used in the ios_facts module there needs to be some checks to catch those errors. > I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. 
not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" && echo ansible-tmp-1473154099.74-15933157338277=""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" ) && sleep 0' PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 455, in main() File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 437, in main runner.run() File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 163, in run File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 88, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py"", line 66, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py"", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 455, in \n main()\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 437, in main\n runner.run()\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 163, in run\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 88, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 66, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"ios_facts: `dir all-filesystems | include Directory`not supported on all devices - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin"" WS-C3750-24PS-S ##### SUMMARY ogenstad: > `dir all-filesystems | include Directory` this is not a valid command on all ios devices. > If it’s used in the ios_facts module there needs to be some checks to catch those errors. > I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. 
not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py ESTABLISH LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" && echo ansible-tmp-1473154099.74-15933157338277=""` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `"" ) && sleep 0' PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 455, in main() File ""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py"", line 437, in main runner.run() File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 163, in run File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py"", line 88, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py"", line 66, in run_commands File ""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py"", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ios_facts"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 455, in \n main()\n File \""/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\"", line 437, in main\n runner.run()\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 163, in run\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\"", line 88, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 66, in run_commands\n File \""/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,ios facts dir all filesystems include directory not supported on all devices issue type bug report component name ios facts ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration os environment flash mz bin ws s summary ogenstad dir all filesystems include directory this is not a valid command on all ios devices if it’s used in the ios facts module there needs to be some checks to catch those errors i haven’t tested the ios facts module yet but if you can just disable that check i’m guessing it would work i e not use gather subset all thanks to ben cirrus from networktocode slack for this bug report steps to reproduce hosts hosts any errors fatal true connection local gather facts no vars cli host ip addr username user password password transport cli tasks ios facts provider cli gather subset all labswitch ip addr expected results no backtrace facts returned actual results using etc ansible ansible cfg as config file playbook get ios facts yml plays in get ios facts yml play task task path root napalm testing get ios facts yml using module file root ansible lib ansible modules core network ios ios facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpxzhjfd to root ansible tmp ansible tmp ios facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios facts py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios facts py line in main file tmp ansible ansible module ios facts py line in main runner run file tmp ansible ansible modlib zip ansible module utils netcli py line in run file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands file tmp ansible ansible modlib zip ansible module utils ios py line in run commands file tmp ansible ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror 
matched error in response dir all filesystems include directory invalid input detected at marker nsw chq sw lab fatal failed changed false failed true invocation module name ios facts module stderr traceback most recent call last n file tmp ansible ansible module ios facts py line in n main n file tmp ansible ansible module ios facts py line in main n runner run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands n file tmp ansible ansible modlib zip ansible module utils ios py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response dir all filesystems include directory r n r n invalid input detected at marker r n r nnsw chq sw lab n module stdout msg module failure to retry use limit get ios facts retry play recap labswitch ok changed unreachable failed ,1 1706,6574416453.0,IssuesEvent,2017-09-11 12:49:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mysql_user support for FUNCTION and PROCEDURE privileges,affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently only `TABLE` privileges can be manipulated with `mysql_user` Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form ``` GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user'; ``` Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level. Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed ``` - mysql_user: user: db_user priv: FUNCTION dbname.function_name:EXECUTE state: present ```",True,"mysql_user support for FUNCTION and PROCEDURE privileges - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently only `TABLE` privileges can be manipulated with `mysql_user` Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form ``` GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user'; ``` Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level. 
Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed ``` - mysql_user: user: db_user priv: FUNCTION dbname.function_name:EXECUTE state: present ```",1,mysql user support for function and procedure privileges issue type feature idea component name mysql user ansible version ansible configuration n a os environment n a summary currently only table privileges can be manipulated with mysql user granting execute privileges on a mysql function requires an sql statement of the form grant execute on function dbname function name to user unfortunately if the function keyword is included in mysql user modules s priv parameter it is not recognized as a valid privilege level object types of function and procedure are supported by mysql and it would be nice if the priv parameter supported specifying object type so that task like the following could be executed mysql user user db user priv function dbname function name execute state present ,1 1136,4998872107.0,IssuesEvent,2016-12-09 21:18:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2_elb_lb: Ansible didn't detect ""scheme"" change",affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /Users/rabe/Documents/Projects/IoT/Design/Devel/Env-Setup/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Changed the scheme from `internal` to `internet-facing`, and Ansible didn't detect that it had to change the state, i. e. perform some work. ##### STEPS TO REPRODUCE Originally had the following: ``` - name: Create Elastic Load Balancer for frontend ec2_elb_lb: name: ""{{ owner }}-elb-{{ env }}-fe"" region: ""{{ region }}"" subnets: - ""{{ mgmt_subnet_fe_1 }}"" - ""{{ mgmt_subnet_fe_2 }}"" state: present scheme: internal cross_az_load_balancing: yes listeners: - protocol: http instance_protocol: http load_balancer_port: 80 instance_port: 80 health_check: ping_protocol: http # options are http, https, ssl, tcp ping_port: 80 ping_path: ""/index.html"" # not required for tcp or ssl response_timeout: 5 # seconds interval: 30 # seconds unhealthy_threshold: 2 healthy_threshold: 10 instance_ids: - ""{{ mgmt_i_fe_1.tagged_instances[0].id }}"" - ""{{ mgmt_i_fe_2.tagged_instances[0].id }}"" tags: Name: ""{{ owner }}_elb_{{ env }}_fe"" Env: ""{{ owner }}_{{ env }}"" Tier: ""{{ owner }}_{{ env }}_frontend"" ``` Changed the above ""scheme"" line as follows: ``` scheme: internet-facing ``` ##### EXPECTED RESULTS I expected that Ansible noticed it had to change the ELB config. ##### ACTUAL RESULTS Ansible did _not_ notice it got work to do. I had to manually remove the ELB and re-run the playbook. ",True,"ec2_elb_lb: Ansible didn't detect ""scheme"" change - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /Users/rabe/Documents/Projects/IoT/Design/Devel/Env-Setup/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Changed the scheme from `internal` to `internet-facing`, and Ansible didn't detect that it had to change the state, i. e. perform some work. 
##### STEPS TO REPRODUCE Originally had the following: ``` - name: Create Elastic Load Balancer for frontend ec2_elb_lb: name: ""{{ owner }}-elb-{{ env }}-fe"" region: ""{{ region }}"" subnets: - ""{{ mgmt_subnet_fe_1 }}"" - ""{{ mgmt_subnet_fe_2 }}"" state: present scheme: internal cross_az_load_balancing: yes listeners: - protocol: http instance_protocol: http load_balancer_port: 80 instance_port: 80 health_check: ping_protocol: http # options are http, https, ssl, tcp ping_port: 80 ping_path: ""/index.html"" # not required for tcp or ssl response_timeout: 5 # seconds interval: 30 # seconds unhealthy_threshold: 2 healthy_threshold: 10 instance_ids: - ""{{ mgmt_i_fe_1.tagged_instances[0].id }}"" - ""{{ mgmt_i_fe_2.tagged_instances[0].id }}"" tags: Name: ""{{ owner }}_elb_{{ env }}_fe"" Env: ""{{ owner }}_{{ env }}"" Tier: ""{{ owner }}_{{ env }}_frontend"" ``` Changed the above ""scheme"" line as follows: ``` scheme: internet-facing ``` ##### EXPECTED RESULTS I expected that Ansible noticed it had to change the ELB config. ##### ACTUAL RESULTS Ansible did _not_ notice it got work to do. I had to manually remove the ELB and re-run the playbook. ",1, elb lb ansible didn t detect scheme change issue type bug report component name elb lb ansible version ansible config file users rabe documents projects iot design devel env setup ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment n a summary changed the scheme from internal to internet facing and ansible didn t detect that it had to change the state i e perform some work steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used originally had the following name create elastic load balancer for frontend elb lb name owner elb env fe region region subnets mgmt subnet fe mgmt subnet fe state present scheme internal cross az load balancing yes listeners protocol http instance protocol http load balancer port instance port health check ping protocol http options are http https ssl tcp ping port ping path index html not required for tcp or ssl response timeout seconds interval seconds unhealthy threshold healthy threshold instance ids mgmt i fe tagged instances id mgmt i fe tagged instances id tags name owner elb env fe env owner env tier owner env frontend changed the above scheme line as follows scheme internet facing expected results i expected that ansible noticed it had to change the elb config actual results ansible did not notice it got work to do i had to manually remove the elb and re run the playbook ,1 1134,4998498507.0,IssuesEvent,2016-12-09 20:02:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,junos_config module fails if config statement not found on device ,affects_2.3 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE Ensure interface ge-0/0/2 is not configured on device. Run below playbook. 
``` tasks: - name: Config Using Core Module junos_config: host: ""{{ inventory_hostname }}"" username: ""xxx"" password: ""xxx"" lines: - delete interfaces ge-0/0/2 - set interfaces ge-0/0/2 unit 0 description lsn-mc1002 register: response1 ``` ##### EXPECTED RESULTS Ansible run should log a warning message ""warning: statement not found"" and continue to execute. Similar to load command on device ``` %cat configuration.txt delete interface ge-0/0/2 set interfaces ge-0/0/2 description test ``` ``` root@junos#load set configuration.txt warning: statement not found load complete root@junos#commit root@junos#show interfaces ge-0/0/2 unit 0 { description test } ``` ##### ACTUAL RESULTS ``` Ansible playbook fails with error message ""msg"": ""Unable to load config: ConfigLoadError(severity: warning, bad_element: None, message: warning: statement not found)"" ``` ",True,"junos_config module fails if config statement not found on device - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ansible 2.3.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE Ensure interface ge-0/0/2 is not configured on device. Run below playbook. ``` tasks: - name: Config Using Core Module junos_config: host: ""{{ inventory_hostname }}"" username: ""xxx"" password: ""xxx"" lines: - delete interfaces ge-0/0/2 - set interfaces ge-0/0/2 unit 0 description lsn-mc1002 register: response1 ``` ##### EXPECTED RESULTS Ansible run should log a warning message ""warning: statement not found"" and continue to execute. Similar to load command on device ``` %cat configuration.txt delete interface ge-0/0/2 set interfaces ge-0/0/2 description test ``` ``` root@junos#load set configuration.txt warning: statement not found load complete root@junos#commit root@junos#show interfaces ge-0/0/2 unit 0 { description test } ``` ##### ACTUAL RESULTS ``` Ansible playbook fails with error message ""msg"": ""Unable to load config: ConfigLoadError(severity: warning, bad_element: None, message: warning: statement not found)"" ``` ",1,junos config module fails if config statement not found on device issue type bug report component name junos config ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mac os summary steps to reproduce ensure interface ge is not configured on device run below playbook tasks name config using core module junos config host inventory hostname username xxx password xxx lines delete interfaces ge set interfaces ge unit description lsn register expected results ansible run should log a warning message warning statement not found and continue to execute similar to load command on device cat configuration txt delete interface ge set interfaces ge description test root junos load set configuration txt warning statement not found load complete root junos commit root junos show interfaces ge unit description test actual results ansible playbook fails with error message msg unable to load config configloaderror severity warning bad element none message warning statement not found ,1 1665,6574059785.0,IssuesEvent,2017-09-11 11:18:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible-2.2.0.0 group_by intermittent bug,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / 
ENVIRONMENT ##### SUMMARY The problem occurs on 2.2 but not on 2.1 A test case was created containing the following statement: debug: var=groups['mygroup1']|list If dynamic host group ""mygroup1"" created in a previous playbook by the group_by module contains four hosts, four hosts are output by the debug statement. If ""mygroup1"" contains eight hosts, only the first host is output by the debug statement most of the time but sometimes the second host is output. This is an intermittent problem. The problem does not occur when the host group contains 3 or 4 hosts but always occurs when the group contains 7 or 8 hosts. ##### STEPS TO REPRODUCE - name: Populate host group with host names # hosts: host1:host2:host3:host4 hosts: host1:host2:host3:host4:host5:host6:host7:host8 gather_facts: no tasks: - name: task1 group_by: key='mygroup1' - name: Run playbook on all hosts in host group ""mygroup1"" hosts: mygroup1 gather_facts: no tasks: - name: task1 debug: var=inventory_hostname - name: Show hosts in host group ""mygroup1"" hosts: localhost gather_facts: no tasks: - name: task1 debug: var=groups['mygroup1']|list ##### EXPECTED RESULTS TASK [task1] ******************************************************************* ok: [localhost] => { ""groups['mygroup1']|list"": [ ""host1"", ""host2"", ""host3"", ""host4"", ""host5"", ""host6"", ""host7"", ""host8"", ] } ##### ACTUAL RESULTS ``` TASK [task1] ******************************************************************* ok: [localhost] => { ""groups['mygroup1']|list"": [ ""host1"", ] } ``` ",True,"ansible-2.2.0.0 group_by intermittent bug - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY The problem occurs on 2.2 but not on 2.1 A test case was created containing the following statement: debug: var=groups['mygroup1']|list If dynamic host group ""mygroup1"" created in a previous playbook by the group_by module contains four hosts, four hosts are output by the debug statement. If ""mygroup1"" contains eight hosts, only the first host is output by the debug statement most of the time but sometimes the second host is output. This is an intermittent problem. The problem does not occur when the host group contains 3 or 4 hosts but always occurs when the group contains 7 or 8 hosts. 
##### STEPS TO REPRODUCE - name: Populate host group with host names # hosts: host1:host2:host3:host4 hosts: host1:host2:host3:host4:host5:host6:host7:host8 gather_facts: no tasks: - name: task1 group_by: key='mygroup1' - name: Run playbook on all hosts in host group ""mygroup1"" hosts: mygroup1 gather_facts: no tasks: - name: task1 debug: var=inventory_hostname - name: Show hosts in host group ""mygroup1"" hosts: localhost gather_facts: no tasks: - name: task1 debug: var=groups['mygroup1']|list ##### EXPECTED RESULTS TASK [task1] ******************************************************************* ok: [localhost] => { ""groups['mygroup1']|list"": [ ""host1"", ""host2"", ""host3"", ""host4"", ""host5"", ""host6"", ""host7"", ""host8"", ] } ##### ACTUAL RESULTS ``` TASK [task1] ******************************************************************* ok: [localhost] => { ""groups['mygroup1']|list"": [ ""host1"", ] } ``` ",1,ansible group by intermittent bug issue type bug report component name group by ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment summary the problem occurs on but not on a test case was created containing the following statement debug var groups list if dynamic host group created in a previous playbook by the group by module contains four hosts four hosts are output by the debug statement if contains eight hosts only the first host is output by the debug statement most of the time but sometimes the second host is output this is an intermittent problem the problem does not occur when the host group contains or hosts but always occurs when the group contains or hosts steps to reproduce to reproduce the bug run the playbook with hosts if run with hosts the bug does not occur if run under ansible with hosts or any large number of hosts the bug does not occur name populate host group with host names hosts hosts gather facts no tasks name group by key name run playbook on all hosts in host group hosts gather facts no tasks name debug var inventory hostname name show hosts in host group hosts localhost gather facts no tasks name debug var groups list expected results task ok groups list actual results task ok groups list ,1 896,4554432795.0,IssuesEvent,2016-09-13 09:25:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,eos_template: TypeError: load_config() got an unexpected keyword argument 'session',affects_2.2 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_template ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9868117d1f) last updated 2016/09/12 20:18:28 (GMT +100) lib/ansible/modules/core: (devel 432ee70da1) last updated 2016/09/12 20:19:14 (GMT +100) lib/ansible/modules/extras: (devel 67a1bebbd3) last updated 2016/09/12 12:05:15 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/commit/e464599632340f247c593f2db770be9782f7dec5 changed `module.config.load_config` to pass in `session` however that doesn't seem to be in https://github.com/ansible/ansible/blame/devel/lib/ansible/module_utils/netcfg.py#L56 @privateip When you ran `ansible-playbook -vvv eos.yaml ` Did you have other local changes ##### STEPS TO REPRODUCE ``` ansible-playbook -vvv eos.yaml -i ../inventory-testnetwork -e ""limit_to=eos_template"" ``` ##### EXPECTED RESULTS Tests to pass ##### ACTUAL RESULTS ``` TASK 
[test_eos_template : configure device with config] ************************ task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_eos_template/tests/cli/force.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/eos/eos_template.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673 `"" && echo ansible-tmp-1473707985.23-162291198567673=""` echo $HOME/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673 `"" ) && sleep 0' PUT /tmp/tmpsPqOzL TO /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/ /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_aJfsia/ansible_module_eos_template.py"", line 215, in main() File ""/tmp/ansible_aJfsia/ansible_module_eos_template.py"", line 205, in main commit=True) File ""/tmp/ansible_aJfsia/ansible_modlib.zip/ansible/module_utils/netcfg.py"", line 58, in load_config TypeError: load_config() got an unexpected keyword argument 'session' ``` ",True,"eos_template: TypeError: load_config() got an unexpected keyword argument 'session' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_template ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9868117d1f) last updated 2016/09/12 20:18:28 (GMT +100) lib/ansible/modules/core: (devel 432ee70da1) last updated 2016/09/12 20:19:14 (GMT +100) lib/ansible/modules/extras: (devel 67a1bebbd3) last updated 2016/09/12 12:05:15 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/commit/e464599632340f247c593f2db770be9782f7dec5 changed `module.config.load_config` to pass in `session` however that doesn't seem to be in https://github.com/ansible/ansible/blame/devel/lib/ansible/module_utils/netcfg.py#L56 @privateip When you ran `ansible-playbook -vvv eos.yaml ` Did you have other local changes ##### STEPS TO REPRODUCE ``` ansible-playbook -vvv eos.yaml -i ../inventory-testnetwork -e ""limit_to=eos_template"" ``` ##### EXPECTED RESULTS Tests to pass ##### ACTUAL RESULTS ``` TASK [test_eos_template : configure device with config] ************************ task path: /home/johnb/git/ansible-inc/test-network-modules/roles/test_eos_template/tests/cli/force.yaml:14 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/eos/eos_template.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673 `"" && echo ansible-tmp-1473707985.23-162291198567673=""` echo $HOME/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673 `"" ) && sleep 0' PUT /tmp/tmpsPqOzL TO /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/ /home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py && sleep 0' EXEC /bin/sh -c 'python 
/home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/eos_template.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1473707985.23-162291198567673/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_aJfsia/ansible_module_eos_template.py"", line 215, in main() File ""/tmp/ansible_aJfsia/ansible_module_eos_template.py"", line 205, in main commit=True) File ""/tmp/ansible_aJfsia/ansible_modlib.zip/ansible/module_utils/netcfg.py"", line 58, in load_config TypeError: load_config() got an unexpected keyword argument 'session' ``` ",1,eos template typeerror load config got an unexpected keyword argument session issue type bug report component name eos template ansible version ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary changed module config load config to pass in session however that doesn t seem to be in privateip when you ran ansible playbook vvv eos yaml did you have other local changes steps to reproduce ansible playbook vvv eos yaml i inventory testnetwork e limit to eos template expected results tests to pass actual results task task path home johnb git ansible inc test network modules roles test eos template tests cli force yaml using module file home johnb git ansible inc ansible lib ansible modules core network eos eos template py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpspqozl to home johnb ansible tmp ansible tmp eos template py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp eos template py sleep exec bin sh c python home johnb ansible tmp ansible tmp eos template py rm rf home johnb ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ajfsia ansible module eos template py line in main file tmp ansible ajfsia ansible module eos template py line in main commit true file tmp ansible ajfsia ansible modlib zip ansible module utils netcfg py line in load config typeerror load config got an unexpected keyword argument session ,1 768,4375656331.0,IssuesEvent,2016-08-05 00:57:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,nxos_config exception calling module_utils.netcfg.NetworkConfig.difference() with arg 'path' ,bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Complete ansible.cfg follows... ``` $ cat /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/ansible.cfg [defaults] # default inventory points to non-prod, just in case... inventory=./inven-lab transport=local gathering=explicit ``` ##### OS / ENVIRONMENT Running Ansible from: ``` Red Hat Enterprise Linux Server 7.2 (Maipo) Python 2.7.5 ``` running in a virtualenv Managing a Cisco Nexus switch ##### SUMMARY nxos_config failing with exception msg: ""MODULE FAILURE"" while calling ```module_utils.netcfg.NetworkConfig.difference()``` nxos_cfg calls difference() method with the 'path' argument. 
The difference() method does not accept the 'path' argument as of '@80ab80b' ##### STEPS TO REPRODUCE Run playbook below ``` --- - name: Basic Verification to Cisco Devices hosts: akldcb-acc-7s pre_tasks: - debug: var=hostvars[inventory_hostname] tasks: - name: ""Configure network interfaces"" nxos_config: lines: - ""no shutdown"" parents: ['interface Ethernet1/44'] provider: { host: ""akldcb-acc-7s"", ssh_keyfile: ""/home/ec2-user/.ssh/sys_Networkautomate_rsa"", username: ""sys_Networkautomate"", } ``` ##### EXPECTED RESULTS Configuration is correctly applied to the Nexus switch. The playbook verified as working in Ansible v2.1 ##### ACTUAL RESULTS Failing with exception message ""MODULE FAILURE"" ``` TASK [Configure network interfaces] ******************************************** task path: /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/cisco_basic_check.yaml:9 Using module file /home/ec2-user/proj-ansible-dev/env/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569 `"" && echo ansible-tmp-1470310590.63-137000207474569=""` echo $HOME/.ansible/tm p/ansible-tmp-1470310590.63-137000207474569 `"" ) && sleep 0' PUT /tmp/tmpPoERwO TO /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/nxos_config.py EXEC /bin/sh -c 'chmod -R u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/ && sleep 0' EXEC /bin/sh -c '/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/nxos_config.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000 207474569/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py"", line 229, in main() File ""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py"", line 200, in main commands = candidate.difference(config, path=parents, match=match, replace=replace) TypeError: difference() got an unexpected keyword argument 'path' fatal: [akldcb-acc-7s]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py\"", line 229, in \n main()\n File \""/tmp/ansible_hzmwpP/ansible_module_nxos _config.py\"", line 200, in main\n commands = candidate.difference(config, path=parents, match=match, replace=replace)\nTypeError: difference() got an unexpected keyword argument 'path'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false } ``` ",True,"nxos_config exception calling module_utils.netcfg.NetworkConfig.difference() with arg 'path' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_config ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Complete ansible.cfg follows... ``` $ cat /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/ansible.cfg [defaults] # default inventory points to non-prod, just in case... 
inventory=./inven-lab transport=local gathering=explicit ``` ##### OS / ENVIRONMENT Running Ansible from: ``` Red Hat Enterprise Linux Server 7.2 (Maipo) Python 2.7.5 ``` running in a virtualenv Managing a Cisco Nexus switch ##### SUMMARY nxos_config failing with exception msg: ""MODULE FAILURE"" while calling ```module_utils.netcfg.NetworkConfig.difference()``` nxos_cfg calls difference() method with the 'path' argument. The difference() method does not accept the 'path' argument as of '@80ab80b' ##### STEPS TO REPRODUCE Run playbook below ``` --- - name: Basic Verification to Cisco Devices hosts: akldcb-acc-7s pre_tasks: - debug: var=hostvars[inventory_hostname] tasks: - name: ""Configure network interfaces"" nxos_config: lines: - ""no shutdown"" parents: ['interface Ethernet1/44'] provider: { host: ""akldcb-acc-7s"", ssh_keyfile: ""/home/ec2-user/.ssh/sys_Networkautomate_rsa"", username: ""sys_Networkautomate"", } ``` ##### EXPECTED RESULTS Configuration is correctly applied to the Nexus switch. The playbook verified as working in Ansible v2.1 ##### ACTUAL RESULTS Failing with exception message ""MODULE FAILURE"" ``` TASK [Configure network interfaces] ******************************************** task path: /home/ec2-user/proj-ansible-dev/ansible-bm-statecontrol/cisco_basic_check.yaml:9 Using module file /home/ec2-user/proj-ansible-dev/env/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569 `"" && echo ansible-tmp-1470310590.63-137000207474569=""` echo $HOME/.ansible/tm p/ansible-tmp-1470310590.63-137000207474569 `"" ) && sleep 0' PUT /tmp/tmpPoERwO TO /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/nxos_config.py EXEC /bin/sh -c 'chmod -R u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/ && sleep 0' EXEC /bin/sh -c '/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000207474569/nxos_config.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1470310590.63-137000 207474569/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py"", line 229, in main() File ""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py"", line 200, in main commands = candidate.difference(config, path=parents, match=match, replace=replace) TypeError: difference() got an unexpected keyword argument 'path' fatal: [akldcb-acc-7s]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""nxos_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_hzmwpP/ansible_module_nxos_config.py\"", line 229, in \n main()\n File \""/tmp/ansible_hzmwpP/ansible_module_nxos _config.py\"", line 200, in main\n commands = candidate.difference(config, path=parents, match=match, replace=replace)\nTypeError: difference() got an unexpected keyword argument 'path'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false } ``` ",1,nxos config exception calling module utils netcfg networkconfig difference with arg path issue type bug report component name nxos config ansible version ansible config file home user proj ansible dev ansible bm statecontrol ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables complete ansible cfg follows cat home user proj ansible dev ansible bm statecontrol ansible cfg default inventory points to non prod just in case inventory inven lab transport local gathering explicit os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running ansible from red hat enterprise linux server maipo python running in a virtualenv managing a cisco nexus switch summary nxos config failing with exception msg module failure while calling module utils netcfg networkconfig difference nxos cfg calls difference method with the path argument the difference method does not accept the path argument as of steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run playbook below name basic verification to cisco devices hosts akldcb acc pre tasks debug var hostvars tasks name configure network interfaces nxos config lines no shutdown parents provider host akldcb acc ssh keyfile home user ssh sys networkautomate rsa username sys networkautomate expected results configuration is correctly applied to the nexus switch the playbook verified as working in ansible actual results failing with exception message module failure task task path home user proj ansible dev ansible bm statecontrol cisco basic check yaml using module file home user proj ansible dev env lib site packages ansible modules core network nxos nxos config py establish local connection for user user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tm p ansible tmp sleep put tmp tmppoerwo to home user ansible tmp ansible tmp nxos config py exec bin sh c chmod r u x home user ansible tmp ansible tmp sleep exec bin sh c usr bin python home user ansible tmp ansible tmp nxos config py rm rf home user ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible hzmwpp ansible module nxos config py line in main file tmp ansible hzmwpp ansible module nxos config py line in main commands candidate difference config path parents match match replace replace typeerror difference got an unexpected keyword argument path fatal failed changed false failed true invocation module name nxos config module stderr traceback most recent call last n file tmp ansible hzmwpp ansible module nxos config py line in n main n file tmp ansible hzmwpp ansible module nxos config py line in main n commands candidate 
difference config path parents match match replace replace ntypeerror difference got an unexpected keyword argument path n module stdout msg module failure parsed false ,1 1866,6577487371.0,IssuesEvent,2017-09-12 01:15:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible subversion module silently fails on network problems,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT ubuntu 14.04.4 trusty fully upgraded ##### SUMMARY sometimes - because of network errors - subversion can't finish the checkout, but this should raise an error, ansible though moves on believing evertyhing is okay I could also say, that if I do an ""svn update"" after a successful ansible checkout zero changes should happen (if the server side did not change of course). ##### STEPS TO REPRODUCE create or find a large svn repo checkout with subversion module stop your network interface while downloading ##### EXPECTED RESULTS ansible should exit with an error ##### ACTUAL RESULTS ansible moves on like if checkout was a success ",True,"ansible subversion module silently fails on network problems - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT ubuntu 14.04.4 trusty fully upgraded ##### SUMMARY sometimes - because of network errors - subversion can't finish the checkout, but this should raise an error, ansible though moves on believing evertyhing is okay I could also say, that if I do an ""svn update"" after a successful ansible checkout zero changes should happen (if the server side did not change of course). 
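A minimal sketch of the kind of check the reporter is asking for: run the svn command non-interactively, treat a non-zero exit status as a hard failure, and use an immediate follow-up `svn update` (which should be a no-op on a complete working copy) as the idempotence signal. The repository URL and destination below are hypothetical, and this is only an illustration of the idea, not the subversion module's actual code.

```
import subprocess


def run_svn(args, cwd=None):
    """Run an svn subcommand non-interactively and fail loudly on errors."""
    cmd = ["svn", "--non-interactive"] + list(args)
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    if proc.returncode != 0:
        # Surfacing the exit status and stderr turns a partial checkout
        # into a visible failure instead of a silent "ok".
        raise RuntimeError(
            "svn %s failed (rc=%d): %s"
            % (args[0], proc.returncode, proc.stderr.strip())
        )
    return proc.stdout


if __name__ == "__main__":
    # Hypothetical repository and destination, for illustration only.
    repo = "https://svn.example.com/repo/trunk"
    dest = "/tmp/svn-checkout"
    run_svn(["checkout", repo, dest])
    # Running "svn update" right afterwards and expecting it to change
    # nothing is the zero-changes check the reporter describes.
    run_svn(["update"], cwd=dest)
```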
##### STEPS TO REPRODUCE create or find a large svn repo checkout with subversion module stop your network interface while downloading ##### EXPECTED RESULTS ansible should exit with an error ##### ACTUAL RESULTS ansible moves on like if checkout was a success ",1,ansible subversion module silently fails on network problems issue type bug report component name subversion ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nothing special os environment ubuntu trusty fully upgraded summary sometimes because of network errors subversion can t finish the checkout but this should raise an error ansible though moves on believing evertyhing is okay i could also say that if i do an svn update after a successful ansible checkout zero changes should happen if the server side did not change of course steps to reproduce create or find a large svn repo checkout with subversion module stop your network interface while downloading expected results ansible should exit with an error actual results ansible moves on like if checkout was a success ,1 1312,5558697078.0,IssuesEvent,2017-03-24 15:18:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service doesn't pass environment to docker-compose,affects_2.1 bug_report cloud docker waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /usr/local/etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY `docker_service` task doesn't pass environment variables defined using `environment` option at task- or playbook-level to the `docker-compose.yml` when using `project_src`. ##### STEPS TO REPRODUCE Playbook: ``` yaml --- - name: demo tasks: - docker_service: project_src=/tmp/compose-dir environment: MYSQL_DB: ""test"" ``` `docker-compose.yml` at `/tmp/compose-dir`: ``` yaml version: '2' services: mysql: image: mysql:5.7 environment: MYSQL_DATABASE: ${MYSQL_DB} ``` ##### EXPECTED RESULTS `mysql` container should see `MYSQL_DATABASE=test` and create database ""test"". ##### ACTUAL RESULTS ""test"" database doesn't created But when calling docker-compose directly: ``` MYSQL_DB=test docker-compose up ``` the ""test"" database is created ",True,"docker_service doesn't pass environment to docker-compose - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /usr/local/etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY `docker_service` task doesn't pass environment variables defined using `environment` option at task- or playbook-level to the `docker-compose.yml` when using `project_src`. ##### STEPS TO REPRODUCE Playbook: ``` yaml --- - name: demo tasks: - docker_service: project_src=/tmp/compose-dir environment: MYSQL_DB: ""test"" ``` `docker-compose.yml` at `/tmp/compose-dir`: ``` yaml version: '2' services: mysql: image: mysql:5.7 environment: MYSQL_DATABASE: ${MYSQL_DB} ``` ##### EXPECTED RESULTS `mysql` container should see `MYSQL_DATABASE=test` and create database ""test"". 
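docker-compose resolves `${MYSQL_DB}` in the compose file from the environment of the process that invokes it, so the variable only takes effect if it reaches the docker-compose child process; an `environment:` setting on the Ansible task does not help unless the module forwards it. A small Python sketch of that forwarding behaviour, reusing the path and variable from the example above; this is a sketch of the expected behaviour, not the docker_service module's code.

```
import os
import subprocess


def compose_up(project_src, extra_env):
    """Run 'docker-compose up -d' with extra variables visible to compose.

    Merging extra_env into the parent environment is what makes
    ${MYSQL_DB} in docker-compose.yml resolve to a value; without the
    merge the child process never sees it.
    """
    env = dict(os.environ)
    env.update(extra_env)
    subprocess.run(
        ["docker-compose", "up", "-d"],
        cwd=project_src,
        env=env,
        check=True,
    )


if __name__ == "__main__":
    # Same values as the playbook and compose file quoted above.
    compose_up("/tmp/compose-dir", {"MYSQL_DB": "test"})
```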
##### ACTUAL RESULTS ""test"" database doesn't created But when calling docker-compose directly: ``` MYSQL_DB=test docker-compose up ``` the ""test"" database is created ",1,docker service doesn t pass environment to docker compose issue type bug report component name docker service ansible version ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary docker service task doesn t pass environment variables defined using environment option at task or playbook level to the docker compose yml when using project src steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used playbook yaml name demo tasks docker service project src tmp compose dir environment mysql db test docker compose yml at tmp compose dir yaml version services mysql image mysql environment mysql database mysql db expected results mysql container should see mysql database test and create database test actual results test database doesn t created but when calling docker compose directly mysql db test docker compose up the test database is created ,1 1043,4847233341.0,IssuesEvent,2016-11-10 14:25:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"cloudformation module on Ansible 2.2.0 throws ""PhysicalResourceId"" error intermittently.",affects_2.2 aws bug_report cloud in progress waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY cloudformation module on Ansible 2.2.0 throws ""PhysicalResourceId"" error intermittently. ##### STEPS TO REPRODUCE Push a stack with Ansible cloudformation module on Ansible 2.2.0 ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'PhysicalResourceId' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Kx3c0c/ansible_module_cloudformation.py\"", line 483, in \n main()\n File \""/tmp/ansible_Kx3c0c/ansible_module_cloudformation.py\"", line 450, in main\n \""physical_resource_id\"": res['PhysicalResourceId'],\nKeyError: 'PhysicalResourceId'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} to retry, use: --limit ``` ",True,"cloudformation module on Ansible 2.2.0 throws ""PhysicalResourceId"" error intermittently. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloudformation ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY cloudformation module on Ansible 2.2.0 throws ""PhysicalResourceId"" error intermittently. ##### STEPS TO REPRODUCE Push a stack with Ansible cloudformation module on Ansible 2.2.0 ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: KeyError: 'PhysicalResourceId' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Kx3c0c/ansible_module_cloudformation.py\"", line 483, in \n main()\n File \""/tmp/ansible_Kx3c0c/ansible_module_cloudformation.py\"", line 450, in main\n \""physical_resource_id\"": res['PhysicalResourceId'],\nKeyError: 'PhysicalResourceId'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} to retry, use: --limit ``` ",1,cloudformation module on ansible throws physicalresourceid error intermittently issue type bug report component name cloudformation ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary cloudformation module on ansible throws physicalresourceid error intermittently steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used push a stack with ansible cloudformation module on ansible expected results actual results an exception occurred during task execution to see the full traceback use vvv the error was keyerror physicalresourceid fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module cloudformation py line in n main n file tmp ansible ansible module cloudformation py line in main n physical resource id res nkeyerror physicalresourceid n module stdout msg module failure to retry use limit ,1 1127,4997919320.0,IssuesEvent,2016-12-09 18:11:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add state=query option for OS Packaging Modules,affects_2.1 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - yum - apt - dnf (but this is in the ansible-modules-extras repo) ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT - Managing Linux ##### SUMMARY I would like to query for the installation status of a particular package without actually changing the state of the package on the machine. This would allow one to register a variable to act upon in some other way. It seems the way this is generally done is to run the shell module and execute commands to determine whether a package is installed. For example: ``` shell: rpm -q register: pkg_query changed_when: false ``` Instead, I would like to use the OS Packaging Modules to query for the installation status of a particular package. This would be like adding a `state: query` option to some of the OS Packaging Modules which would then allow one to register a variable to capture the result. This would also help cleanup repetitive tasks just to handle different package managers: `apt`, `yum` and `dnf` are the ones that come to mind where this would be useful. ",True,"Add state=query option for OS Packaging Modules - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - yum - apt - dnf (but this is in the ansible-modules-extras repo) ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### OS / ENVIRONMENT - Managing Linux ##### SUMMARY I would like to query for the installation status of a particular package without actually changing the state of the package on the machine. 
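The intermittent cloudformation failure above is a plain KeyError: the module indexes `res['PhysicalResourceId']`, and the traceback shows that key is sometimes missing from the resource data returned by the API. A sketch of the defensive lookup with `dict.get()`; the resource dictionaries below are fabricated purely to show the difference between `[]` and `.get()`, not real API output.

```
def summarize_resources(resources):
    """Build a per-resource summary without assuming every key is present."""
    summary = []
    for res in resources:
        summary.append({
            "logical_resource_id": res.get("LogicalResourceId"),
            # .get() returns None instead of raising KeyError when the
            # field is absent from the response.
            "physical_resource_id": res.get("PhysicalResourceId"),
            "status": res.get("ResourceStatus"),
        })
    return summary


if __name__ == "__main__":
    # Fabricated example: the second entry lacks PhysicalResourceId, which
    # is the shape that triggers the reported traceback when indexed with [].
    resources = [
        {"LogicalResourceId": "WebServer", "PhysicalResourceId": "i-0abc",
         "ResourceStatus": "CREATE_COMPLETE"},
        {"LogicalResourceId": "AppBucket", "ResourceStatus": "CREATE_IN_PROGRESS"},
    ]
    for row in summarize_resources(resources):
        print(row)
```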
This would allow one to register a variable to act upon in some other way. It seems the way this is generally done is to run the shell module and execute commands to determine whether a package is installed. For example: ``` shell: rpm -q register: pkg_query changed_when: false ``` Instead, I would like to use the OS Packaging Modules to query for the installation status of a particular package. This would be like adding a `state: query` option to some of the OS Packaging Modules which would then allow one to register a variable to capture the result. This would also help cleanup repetitive tasks just to handle different package managers: `apt`, `yum` and `dnf` are the ones that come to mind where this would be useful. ",1,add state query option for os packaging modules issue type feature idea component name yum apt dnf but this is in the ansible modules extras repo ansible version ansible os environment managing linux summary i would like to query for the installation status of a particular package without actually changing the state of the package on the machine this would allow one to register a variable to act upon in some other way it seems the way this is generally done is to run the shell module and execute commands to determine whether a package is installed for example shell rpm q register pkg query changed when false instead i would like to use the os packaging modules to query for the installation status of a particular package this would be like adding a state query option to some of the os packaging modules which would then allow one to register a variable to capture the result this would also help cleanup repetitive tasks just to handle different package managers apt yum and dnf are the ones that come to mind where this would be useful for bugs show exactly how to reproduce the problem for new features show how the feature would be used ,1 1667,6574070847.0,IssuesEvent,2017-09-11 11:21:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_eip does not assign an ip to an eni,affects_2.3 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_eip ##### ANSIBLE VERSION ansible 2.3.0 (devel a83b00bbc0) last updated 2016/11/29 22:28:22 (GMT +000) ##### CONFIGURATION standard ##### OS / ENVIRONMENT centos ##### SUMMARY If trying to assign an eip to an eni, the task will fail with a boto ""MissingParameter"" error. ##### STEPS TO REPRODUCE Run the following task. ``` ec2_eip: region: ""{{ aws_region }}"" device_id: ""{{ eni_a.interface.id }}"" reuse_existing_ip_allowed: yes state: present ``` ##### EXPECTED RESULTS The ENI is assigned either a spare EIP or a new EIP ##### ACTUAL RESULTS Task fails with the following message: ``` ""EC2ResponseError: 400 Bad Request\n\nMissingParameterEither public IP or allocation id must be specifiedab0b561f-47fa-4f43-80e4-8701b6666602"" ```",True,"ec2_eip does not assign an ip to an eni - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_eip ##### ANSIBLE VERSION ansible 2.3.0 (devel a83b00bbc0) last updated 2016/11/29 22:28:22 (GMT +000) ##### CONFIGURATION standard ##### OS / ENVIRONMENT centos ##### SUMMARY If trying to assign an eip to an eni, the task will fail with a boto ""MissingParameter"" error. ##### STEPS TO REPRODUCE Run the following task. 
``` ec2_eip: region: ""{{ aws_region }}"" device_id: ""{{ eni_a.interface.id }}"" reuse_existing_ip_allowed: yes state: present ``` ##### EXPECTED RESULTS The ENI is assigned either a spare EIP or a new EIP ##### ACTUAL RESULTS Task fails with the following message: ``` ""EC2ResponseError: 400 Bad Request\n\nMissingParameterEither public IP or allocation id must be specifiedab0b561f-47fa-4f43-80e4-8701b6666602"" ```",1, eip does not assign an ip to an eni issue type bug report component name eip ansible version ansible devel last updated gmt configuration standard os environment centos summary if trying to assign an eip to an eni the task will fail with a boto missingparameter error steps to reproduce run the following task eip region aws region device id eni a interface id reuse existing ip allowed yes state present expected results the eni is assigned either a spare eip or a new eip actual results task fails with the following message bad request n n missingparameter either public ip or allocation id must be specified ,1 1724,6574505992.0,IssuesEvent,2017-09-11 13:08:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum list installed doesn't show source repo,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - yum ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/ak/ansible/webservers/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Ansible Host: CentOS7 Managed: CentOS7 ##### SUMMARY no repo name is provided for yum list=installed ##### STEPS TO REPRODUCE ``` tasks: - name: yum list yum: list=installed register: output - name: show all debug: ""msg={{ output.results }}"" ``` ##### EXPECTED RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""@base"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""@anaconda"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` ##### ACTUAL RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""installed"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""installed"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` Also: compare with `yum list installed` in command-line: ``` yum-utils.noarch 1.1.31-34.el7 @base zlib.x86_64 1.2.7-15.el7 @anaconda ``` ",True,"yum list installed doesn't show source repo - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - yum ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/ak/ansible/webservers/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Ansible Host: CentOS7 Managed: CentOS7 ##### SUMMARY no repo name is provided for yum list=installed ##### STEPS TO REPRODUCE ``` tasks: - name: yum list yum: list=installed register: output - name: show all debug: ""msg={{ output.results }}"" ``` ##### EXPECTED RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""@base"", 
""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""@anaconda"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` ##### ACTUAL RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""installed"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""installed"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` Also: compare with `yum list installed` in command-line: ``` yum-utils.noarch 1.1.31-34.el7 @base zlib.x86_64 1.2.7-15.el7 @anaconda ``` ",1,yum list installed doesn t show source repo issue type bug report component name yum ansible version ansible config file home ak ansible webservers ansible cfg configured module search path default w o overrides os environment ansible host managed summary no repo name is provided for yum list installed steps to reproduce tasks name yum list yum list installed register output name show all debug msg output results expected results arch noarch epoch name yum utils nevra yum utils noarch release repo base version yumstate installed arch epoch name zlib nevra zlib release repo anaconda version yumstate installed actual results arch noarch epoch name yum utils nevra yum utils noarch release repo installed version yumstate installed arch epoch name zlib nevra zlib release repo installed version yumstate installed also compare with yum list installed in command line yum utils noarch base zlib anaconda ,1 756,4351919971.0,IssuesEvent,2016-08-01 02:56:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"glance_image module sets HAS_GLANCECLIENT, but checks for HAVE_GLANCECLIENT",bug_report cloud waiting_on_maintainer,"On line 137 glance_image module does: ``` try: import glanceclient HAS_GLANCECLIENT = True except ImportError: HAS_GLANCECLIENT = False try: from keystoneclient.v2_0 import client as ksclient HAS_KEYSTONECLIENT = True except ImportError: HAS_KEYSTONECLIENT= False ``` then later on line 250: if not HAVE_GLANCECLIENT: module.fail_json(msg='python-glanceclient is required for this module') if not HAVE_KEYSTONECLIENT: module.fail_json(msg='python-keystoneclient is required for this module') Probibly should be setting the same variable it's checking for?",True,"glance_image module sets HAS_GLANCECLIENT, but checks for HAVE_GLANCECLIENT - On line 137 glance_image module does: ``` try: import glanceclient HAS_GLANCECLIENT = True except ImportError: HAS_GLANCECLIENT = False try: from keystoneclient.v2_0 import client as ksclient HAS_KEYSTONECLIENT = True except ImportError: HAS_KEYSTONECLIENT= False ``` then later on line 250: if not HAVE_GLANCECLIENT: module.fail_json(msg='python-glanceclient is required for this module') if not HAVE_KEYSTONECLIENT: module.fail_json(msg='python-keystoneclient is required for this module') Probibly should be setting the same variable it's checking for?",1,glance image module sets has glanceclient but checks for have glanceclient on line glance image module does try import glanceclient has glanceclient true except importerror has glanceclient false try from keystoneclient import client as ksclient has keystoneclient true except importerror has keystoneclient 
false then later on line if not have glanceclient module fail json msg python glanceclient is required for this module if not have keystoneclient module fail json msg python keystoneclient is required for this module probibly should be setting the same variable it s checking for ,1 1561,6572254771.0,IssuesEvent,2017-09-11 00:39:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum plugin doesn't support update-to,affects_2.3 feature_idea waiting_on_maintainer,"With yum I can say `yum update-to foo-1.2`, which will make sure that the package foo gets updated to specifically version 1.2. It won't install foo-1.2 if a version of foo wasn't already on the system and it won't update the foo package to anything higher than version 1.2 (`yum update foo-1.2` is the same as `yum update foo` if the version of foo already installed is 1.2). Also, update-to without a version is the same as update. e.g. `yum update foo` == `yum update-to foo` Something like `yum: name=httpd-2.2.29-1.4.amzn1 state=update-to`? ",True,"Yum plugin doesn't support update-to - With yum I can say `yum update-to foo-1.2`, which will make sure that the package foo gets updated to specifically version 1.2. It won't install foo-1.2 if a version of foo wasn't already on the system and it won't update the foo package to anything higher than version 1.2 (`yum update foo-1.2` is the same as `yum update foo` if the version of foo already installed is 1.2). Also, update-to without a version is the same as update. e.g. `yum update foo` == `yum update-to foo` Something like `yum: name=httpd-2.2.29-1.4.amzn1 state=update-to`? ",1,yum plugin doesn t support update to with yum i can say yum update to foo which will make sure that the package foo gets updated to specifically version it won t install foo if a version of foo wasn t already on the system and it won t update the foo package to anything higher than version yum update foo is the same as yum update foo if the version of foo already installed is also update to without a version is the same as update e g yum update foo yum update to foo something like yum name httpd state update to ,1 1856,6577402365.0,IssuesEvent,2017-09-12 00:39:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_router: HA interfaces break os_router module.,affects_2.0 bug_report cloud openstack waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY The HA ports cause issues when deleting a router through this module. This means that any router updates or deletions through this module will fail. Currently, for updates, the code retrieves all internal interfaces of a router(including the HA ports), then tries to delete them. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L330 The principle is the same for deletion. However, neutron does not allow these interfaces to be deleted and will throw an error on any such attempt. ##### STEPS TO REPRODUCE 1. Create a router using the os_router module on an environment running Neutron L3HA using the VRRP protocol(i'm unsure about DVR). 2. Update it's configurations 3. Re-run the playbooks. They will fail when trying to delete the HA ports. ##### EXPECTED RESULTS The playbooks will fail to run. ",True,"os_router: HA interfaces break os_router module. 
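The glance_image snippet quoted above sets HAS_GLANCECLIENT and HAS_KEYSTONECLIENT at import time but later tests HAVE_GLANCECLIENT and HAVE_KEYSTONECLIENT, so the guard references names that are never defined. A consistent version of the same import-guard pattern is shown below; it is wrapped in a helper only so the sketch stands on its own, and `module` stands in for the AnsibleModule instance the real code already has.

```
try:
    import glanceclient  # noqa: F401
    HAS_GLANCECLIENT = True
except ImportError:
    HAS_GLANCECLIENT = False

try:
    from keystoneclient.v2_0 import client as ksclient  # noqa: F401
    HAS_KEYSTONECLIENT = True
except ImportError:
    HAS_KEYSTONECLIENT = False


def check_client_libraries(module):
    # Test the same names that were set above; the mismatch in the report
    # meant these checks referenced variables that were never assigned.
    if not HAS_GLANCECLIENT:
        module.fail_json(msg='python-glanceclient is required for this module')
    if not HAS_KEYSTONECLIENT:
        module.fail_json(msg='python-keystoneclient is required for this module')
```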
- ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY The HA ports cause issues when deleting a router through this module. This means that any router updates or deletions through this module will fail. Currently, for updates, the code retrieves all internal interfaces of a router(including the HA ports), then tries to delete them. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L330 The principle is the same for deletion. However, neutron does not allow these interfaces to be deleted and will throw an error on any such attempt. ##### STEPS TO REPRODUCE 1. Create a router using the os_router module on an environment running Neutron L3HA using the VRRP protocol(i'm unsure about DVR). 2. Update it's configurations 3. Re-run the playbooks. They will fail when trying to delete the HA ports. ##### EXPECTED RESULTS The playbooks will fail to run. ",1,os router ha interfaces break os router module issue type bug report component name os router py ansible version ansible os environment na summary the ha ports cause issues when deleting a router through this module this means that any router updates or deletions through this module will fail currently for updates the code retrieves all internal interfaces of a router including the ha ports then tries to delete them see the principle is the same for deletion however neutron does not allow these interfaces to be deleted and will throw an error on any such attempt steps to reproduce create a router using the os router module on an environment running neutron using the vrrp protocol i m unsure about dvr update it s configurations re run the playbooks they will fail when trying to delete the ha ports expected results the playbooks will fail to run ,1 791,4389802065.0,IssuesEvent,2016-08-08 23:41:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive on OSX does not extract the archive and fails with an error,bug_report P2 waiting_on_maintainer," ##### ISSUE TYPE Bug ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION Ansible on OSX 10.10, installed through Homebrew. ``` $ ansible --version ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Not settings changed (to my knowledge ;-)). ##### OS / ENVIRONMENT OSX 10.10, Homebrew ##### SUMMARY When executing the play pasted below Ansible fails when excuting the unarchive command. EDIT: This is a regression because it happens with my existing plays which worked perfectly not long ago. I am pretty sure that this problem did not exist in the Ansible 2.0.x release on OSX using Homebrew. ##### STEPS TO REPRODUCE 1. Copy the play below into a file 2. Download the archive file (http://share.astina.io/openssl-certs.tar.gz) and put it besides the file. Alternatively you can just create your own archive, I guess the contents are irrelevant (maybe it is relevant it being a `tar.gz` archive) 3. 
Run `ansible-playbook -vvvv main.yml` ``` --- - name: Test unarchive hosts: 127.0.0.1 connection: local tasks: - name: Create destination folder file: path=test-certificates state=directory - name: Install public certificates for OpenSSL unarchive: src=openssl-certs.tar.gz dest=test-certificates creates=test-certificates/VeriSign_Universal_Root_Certification_Authority.pem ``` ##### EXPECTED RESULTS The play runs through and in the play's directory there is a directory called `test-certificates` with the contents of the archive. ##### ACTUAL RESULTS Ansible fails with the following error: ``` fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""copy"": true, ""creates"": ""test-certificates/VeriSign_Universal_Root_Certification_Authority.pem"", ""delimiter"": null, ""dest"": ""test-certificates"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": null, ""keep_newer"": false, ""list_files"": false, ""mode"": null, ""original_basename"": ""openssl-certs.tar.gz"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/Users/raffaele/.ansible/tmp/ansible-tmp-1465980002.6-168001143856730/source""}}, ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: 'test-certificates/ACCVRAIZ1.pem'"", ""stat"": {""exists"": false}} ``` ",True,"unarchive on OSX does not extract the archive and fails with an error - ##### ISSUE TYPE Bug ##### COMPONENT NAME unarchive module ##### ANSIBLE VERSION Ansible on OSX 10.10, installed through Homebrew. ``` $ ansible --version ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Not settings changed (to my knowledge ;-)). ##### OS / ENVIRONMENT OSX 10.10, Homebrew ##### SUMMARY When executing the play pasted below Ansible fails when excuting the unarchive command. EDIT: This is a regression because it happens with my existing plays which worked perfectly not long ago. I am pretty sure that this problem did not exist in the Ansible 2.0.x release on OSX using Homebrew. ##### STEPS TO REPRODUCE 1. Copy the play below into a file 2. Download the archive file (http://share.astina.io/openssl-certs.tar.gz) and put it besides the file. Alternatively you can just create your own archive, I guess the contents are irrelevant (maybe it is relevant it being a `tar.gz` archive) 3. Run `ansible-playbook -vvvv main.yml` ``` --- - name: Test unarchive hosts: 127.0.0.1 connection: local tasks: - name: Create destination folder file: path=test-certificates state=directory - name: Install public certificates for OpenSSL unarchive: src=openssl-certs.tar.gz dest=test-certificates creates=test-certificates/VeriSign_Universal_Root_Certification_Authority.pem ``` ##### EXPECTED RESULTS The play runs through and in the play's directory there is a directory called `test-certificates` with the contents of the archive. ##### ACTUAL RESULTS Ansible fails with the following error: ``` fatal: [127.0.0.1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""copy"": true, ""creates"": ""test-certificates/VeriSign_Universal_Root_Certification_Authority.pem"", ""delimiter"": null, ""dest"": ""test-certificates"", ""directory_mode"": null, ""exclude"": [], ""extra_opts"": [], ""follow"": false, ""force"": null, ""group"": null, ""keep_newer"": false, ""list_files"": false, ""mode"": null, ""original_basename"": ""openssl-certs.tar.gz"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/Users/raffaele/.ansible/tmp/ansible-tmp-1465980002.6-168001143856730/source""}}, ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: 'test-certificates/ACCVRAIZ1.pem'"", ""stat"": {""exists"": false}} ``` ",1,unarchive on osx does not extract the archive and fails with an error issue type bug component name unarchive module ansible version ansible on osx installed through homebrew ansible version ansible config file configured module search path default w o overrides configuration not settings changed to my knowledge os environment osx homebrew summary when executing the play pasted below ansible fails when excuting the unarchive command edit this is a regression because it happens with my existing plays which worked perfectly not long ago i am pretty sure that this problem did not exist in the ansible x release on osx using homebrew steps to reproduce copy the play below into a file download the archive file and put it besides the file alternatively you can just create your own archive i guess the contents are irrelevant maybe it is relevant it being a tar gz archive run ansible playbook vvvv main yml name test unarchive hosts connection local tasks name create destination folder file path test certificates state directory name install public certificates for openssl unarchive src openssl certs tar gz dest test certificates creates test certificates verisign universal root certification authority pem expected results the play runs through and in the play s directory there is a directory called test certificates with the contents of the archive actual results ansible fails with the following error fatal failed changed false failed true invocation module args backup null content null copy true creates test certificates verisign universal root certification authority pem delimiter null dest test certificates directory mode null exclude extra opts follow false force null group null keep newer false list files false mode null original basename openssl certs tar gz owner null regexp null remote src null selevel null serole null setype null seuser null src users raffaele ansible tmp ansible tmp source msg unexpected error when accessing exploded file no such file or directory test certificates pem stat exists false ,1 1765,6575022287.0,IssuesEvent,2017-09-11 14:48:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Timeout: Cannot save running-config with nxos_command or save: yes in nxos_config,affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_command nxos_config ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = 
/home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. title ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Saving the running-config nxos_command: provider: ""{{ connections.nxapi }}"" commands: - ""copy running-config startup-config"" register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS We should be able to save the modifications. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Saving the running-config] *********************************** fatal: [NX_OSv_Spine_12]: FAILED! => {""changed"": false, ""clierror"": ""Syntax error while parsing 'copy running-config startup-config | xml '\n\n\nCmd exec error.\n"", ""code"": ""400"", ""failed"": true, ""input"": ""copy running-config startup-config"", ""msg"": ""Input CLI command error"", ""output"": {""clierror"": ""Syntax error while parsing 'copy running-config startup-config | xml '\n\n\nCmd exec error.\n"", ""code"": ""400"", ""input"": ""copy running-config startup-config"", ""msg"": ""Input CLI command error""}, ""url"": ""http://172.21.100.12:8080/ins""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_12# copy running-config startup-config [########################################] 100% Copy complete. NX_OSv_Spine_12# show startup-config !Command: show startup-config !Time: Mon Oct 10 18:12:08 2016 !Startup config saved at: Mon Oct 10 18:01:55 2016 version 7.3(0)D1(1) power redundancy-mode redundant ``` ",True,"Timeout: Cannot save running-config with nxos_command or save: yes in nxos_config - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_command nxos_config ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. title ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... 
nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Saving the running-config nxos_command: provider: ""{{ connections.nxapi }}"" commands: - ""copy running-config startup-config"" register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS We should be able to save the modifications. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Saving the running-config] *********************************** fatal: [NX_OSv_Spine_12]: FAILED! => {""changed"": false, ""clierror"": ""Syntax error while parsing 'copy running-config startup-config | xml '\n\n\nCmd exec error.\n"", ""code"": ""400"", ""failed"": true, ""input"": ""copy running-config startup-config"", ""msg"": ""Input CLI command error"", ""output"": {""clierror"": ""Syntax error while parsing 'copy running-config startup-config | xml '\n\n\nCmd exec error.\n"", ""code"": ""400"", ""input"": ""copy running-config startup-config"", ""msg"": ""Input CLI command error""}, ""url"": ""http://172.21.100.12:8080/ins""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_12# copy running-config startup-config [########################################] 100% Copy complete. NX_OSv_Spine_12# show startup-config !Command: show startup-config !Time: Mon Oct 10 18:12:08 2016 !Startup config saved at: Mon Oct 10 18:01:55 2016 version 7.3(0)D1(1) power redundancy-mode redundant ``` ",1,timeout cannot save running config with nxos command or save yes in nxos config issue type bug report component name nxos command nxos config ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment host ubuntu target nx osv summary cf title steps to reproduce inventory hosts nms mgt ip address nx osv spine ansible host nx osv spine ansible host structure passed as provider connections ssh defined in group vars nx osv connections yml and a symbolic link in roles nxos snmp defaults points to nx osv connections nxapi transport nxapi host ansible host ansible port port http port ansible user username admin password xxxxxxxx enable secret password auth pass xxxxxxxx http or https use ssl no validate certs https validate certs role nxos snmp include vars defaults os family connections yml name saving the running config nxos command provider connections nxapi commands copy running config startup config register result playbook name configuring snmp on nx os nx osv hosts nx osv roles nxos snmp expected results we should be able to save the modifications actual results task fatal failed changed false clierror syntax error while parsing copy running config startup config xml n n ncmd exec error n code failed true input copy running config startup config msg input cli command error output clierror syntax error while parsing copy running config startup config xml n n ncmd exec error n code input copy running config startup config msg input cli command error url no issue when configuring 
through the cli nx osv spine copy running config startup config copy complete nx osv spine show startup config command show startup config time mon oct startup config saved at mon oct version power redundancy mode redundant ,1 798,4415180075.0,IssuesEvent,2016-08-13 22:26:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Optional Parameter called Source for win_feature,feature_idea waiting_on_maintainer windows,"Issue Type: Feature Idea Component Name: win_feature Ansible Version: 1.9.2 Ansible Configuration: Stock install with Extra's modules. Environment: Windows 2012 R2 Summary: When trying to add a new feature like DotNet 3.5 core (feature is called 'NET-Framework-Core') to Win 2012 R2, it fails because 'NET-Framework-Core' did not exist on the box natively or was removed by a Windows Security Update. For it to install successfully, you need to be able to pass an argument called ""source"" to win_feature ie: D:\sources\sxs or \\IP\Share\sources\sxs so that it can pass it onto the cmdlet install-windowsfeature. Steps To Reproduce: On a Windows 2012 R2 Machine without DotNet 3.5 source, run the following: ansible -m win_feature -a ""name=NET-Framework-Core"" windowsvms Expected Results: TASK: [Install DotNet Framework 3.5 Feature] ********************************** ok: [site-06] ok: [site-05] changed: [site-07] Actual Results: TASK: [Install DotNet Framework 3.5 Feature] ********************************** ok: [site-06] ok: [site-05] failed: [site-07] => {""changed"": false, ""exitcode"": ""Failed"", ""failed"": true, ""feature_result"": [], ""restart_needed"": false, ""success"": false} msg: Failed to add feature If I get time tonight I will fix win_feature.ps1 and post the changes.",True,"Optional Parameter called Source for win_feature - Issue Type: Feature Idea Component Name: win_feature Ansible Version: 1.9.2 Ansible Configuration: Stock install with Extra's modules. Environment: Windows 2012 R2 Summary: When trying to add a new feature like DotNet 3.5 core (feature is called 'NET-Framework-Core') to Win 2012 R2, it fails because 'NET-Framework-Core' did not exist on the box natively or was removed by a Windows Security Update. For it to install successfully, you need to be able to pass an argument called ""source"" to win_feature ie: D:\sources\sxs or \\IP\Share\sources\sxs so that it can pass it onto the cmdlet install-windowsfeature. 
Steps To Reproduce: On a Windows 2012 R2 Machine without DotNet 3.5 source, run the following: ansible -m win_feature -a ""name=NET-Framework-Core"" windowsvms Expected Results: TASK: [Install DotNet Framework 3.5 Feature] ********************************** ok: [site-06] ok: [site-05] changed: [site-07] Actual Results: TASK: [Install DotNet Framework 3.5 Feature] ********************************** ok: [site-06] ok: [site-05] failed: [site-07] => {""changed"": false, ""exitcode"": ""Failed"", ""failed"": true, ""feature_result"": [], ""restart_needed"": false, ""success"": false} msg: Failed to add feature If I get time tonight I will fix win_feature.ps1 and post the changes.",1,optional parameter called source for win feature issue type feature idea component name win feature ansible version ansible configuration stock install with extra s modules environment windows summary when trying to add a new feature like dotnet core feature is called net framework core to win it fails because net framework core did not exist on the box natively or was removed by a windows security update for it to install successfully you need to be able to pass an argument called source to win feature ie d sources sxs or ip share sources sxs so that it can pass it onto the cmdlet install windowsfeature steps to reproduce on a windows machine without dotnet source run the following ansible m win feature a name net framework core windowsvms expected results task ok ok changed actual results task ok ok failed changed false exitcode failed failed true feature result restart needed false success false msg failed to add feature if i get time tonight i will fix win feature and post the changes ,1 1250,5313304391.0,IssuesEvent,2017-02-13 11:45:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,win_file: Failed to delete folder with broken symlink,affects_2.2 bug_report waiting_on_maintainer windows,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/windows/win_file.ps1 ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 7e0074263d) last updated 2016/10/02 10:15:29 (GMT +1100) lib/ansible/modules/core: (detached HEAD 18f710fe32) last updated 2016/10/02 10:15:41 (GMT +1100) lib/ansible/modules/extras: (detached HEAD a58e1d59c0) last updated 2016/10/02 10:15:53 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### OS / ENVIRONMENT Running: Bash for Ubuntu on Windows Managing: Windows Server 2012 R2 ##### SUMMARY When setting win_file state: absent for a directory that contains a symlink that is no longer pointing to a valid directory it fails with DirectoryNotFoundException ##### STEPS TO REPRODUCE Inventory.ini ``` [windows] host [windows:vars] ansible_user=.\User ansible_password=Password ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=ntlm ansible_winrm_server_cert_validation=ignore ``` Playbook.yml ``` - name: test windows hosts: windows tasks: - name: create test folder win_file: path: C:\temp\test state: directory - name: create symlink raw: CMD.EXE /C mklink /d C:\temp\link C:\temp\test - name: delete folder that link is pointing to win_file: path: C:\temp\test state: absent - name: delete parent folder containing broken link win_file: path: C:\temp state: absent ``` ##### EXPECTED RESULTS Expected C:\temp to be deleted correctly without exception ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: At C:\Users\User\AppData\Local\Temp\ansible-tmp-1475364906.0-88571632788975\win_file.ps1:317 char:9 + Remove-Item -Recurse -Force $fileinfo + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ fatal: [192.168.1.13]: FAILED! => { ""changed"": false, ""error_record"": { ""CategoryInfo"": { ""Activity"": ""Remove-Item"", ""Category"": 23, ""Reason"": ""DirectoryNotFoundException"", ""TargetName"": ""C:\\temp"", ""TargetType"": ""String"" }, ""ErrorDetails"": null, ""Exception"": { ""Data"": {}, ""HResult"": -2147024893, ""HelpLink"": null, ""InnerException"": null, ""Message"": ""Could not find a part of the path 'C:\\temp\\link'."", ""Source"": ""mscorlib"", ""StackTrace"": "" at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)\r\n at System.IO.FileSystemEnumerableIterator`1.CommonInit()\r\n at System.IO.FileSystemEnumerableIterator`1..ctor(String path, String originalUserPath, String searchPattern, SearchOption searchOption, SearchResultHandler`1 resultHandler, Boolean checkHost)\r\n at System.IO.FileSystemEnumerableFactory.CreateDirectoryInfoIterator(String path, String originalUserPath, String searchPattern, SearchOption searchOption)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveDirectoryInfoItem(DirectoryInfo directory, Boolean recurse, Boolean force, Boolean rootOfRemoval)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveDirectoryInfoItem(DirectoryInfo directory, Boolean recurse, Boolean force, Boolean rootOfRemoval)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveItem(String path, Boolean recurse)"", ""TargetSite"": { ""Attributes"": 147, ""CallingConvention"": 1, ""ContainsGenericParameters"": false, ""CustomAttributes"": ""[System.Security.SecurityCriticalAttribute()]"", ""DeclaringType"": ""System.IO.__Error, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"", ""IsAbstract"": false, ""IsAssembly"": true, ""IsConstructor"": false, ""IsFamily"": false, ""IsFamilyAndAssembly"": false, ""IsFamilyOrAssembly"": false, ""IsFinal"": false, ""IsGenericMethod"": false, ""IsGenericMethodDefinition"": false, ""IsHideBySig"": true, ""IsPrivate"": false, ""IsPublic"": false, ""IsSecurityCritical"": true, ""IsSecuritySafeCritical"": false, ""IsSecurityTransparent"": false, ""IsSpecialName"": false, ""IsStatic"": true, ""IsVirtual"": false, ""MemberType"": 8, ""MetadataToken"": 100680987, ""MethodHandle"": ""System.RuntimeMethodHandle"", ""MethodImplementationFlags"": 0, ""Module"": ""CommonLanguageRuntimeLibrary"", ""Name"": ""WinIOError"", ""ReflectedType"": ""System.IO.__Error, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"", ""ReturnParameter"": ""Void "", ""ReturnType"": ""void"", ""ReturnTypeCustomAttributes"": ""Void "" } }, ""FullyQualifiedErrorId"": ""RemoveItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand"", ""InvocationInfo"": { ""BoundParameters"": {}, ""CommandOrigin"": 1, ""DisplayScriptPosition"": null, ""ExpectingInput"": false, ""HistoryId"": 1, ""InvocationName"": ""Remove-Item"", ""Line"": "" Remove-Item -Recurse -Force $fileinfo\n"", ""MyCommand"": { ""CommandType"": 8, ""DefaultParameterSet"": ""Path"", ""Definition"": ""\r\nRemove-Item [-Path] [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []\r\n\r\nRemove-Item -LiteralPath [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []\r\n"", ""HelpFile"": 
""Microsoft.PowerShell.Commands.Management.dll-Help.xml"", ""ImplementingType"": ""Microsoft.PowerShell.Commands.RemoveItemCommand"", ""Module"": ""Microsoft.PowerShell.Management"", ""ModuleName"": ""Microsoft.PowerShell.Management"", ""Name"": ""Remove-Item"", ""Noun"": ""Item"", ""Options"": 1, ""OutputType"": """", ""PSSnapIn"": null, ""ParameterSets"": ""[-Path] [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] [] -LiteralPath [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []"", ""Parameters"": ""System.Collections.Generic.Dictionary`2[System.String,System.Management.Automation.ParameterMetadata]"", ""RemotingCapability"": 1, ""Verb"": ""Remove"", ""Visibility"": 0 }, ""OffsetInLine"": 9, ""PSCommandPath"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1"", ""PSScriptRoot"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975"", ""PipelineLength"": 0, ""PipelinePosition"": 0, ""PositionMessage"": ""At C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1:317 char:9\r\n+ Remove-Item -Recurse -Force $fileinfo\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"", ""ScriptLineNumber"": 317, ""ScriptName"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1"", ""UnboundArguments"": [] }, ""PSMessageDetails"": null, ""PipelineIterationInfo"": [], ""ScriptStackTrace"": ""at , C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1: line 317\r\nat , : line 4"", ""TargetObject"": ""C:\\temp"" }, ""failed"": true, ""invocation"": { ""module_name"": ""win_file"" }, ""msg"": ""Could not find a part of the path 'C:\\temp\\link'."" } ``` ",True,"win_file: Failed to delete folder with broken symlink - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible-modules-core/windows/win_file.ps1 ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 7e0074263d) last updated 2016/10/02 10:15:29 (GMT +1100) lib/ansible/modules/core: (detached HEAD 18f710fe32) last updated 2016/10/02 10:15:41 (GMT +1100) lib/ansible/modules/extras: (detached HEAD a58e1d59c0) last updated 2016/10/02 10:15:53 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### OS / ENVIRONMENT Running: Bash for Ubuntu on Windows Managing: Windows Server 2012 R2 ##### SUMMARY When setting win_file state: absent for a directory that contains a symlink that is no longer pointing to a valid directory it fails with DirectoryNotFoundException ##### STEPS TO REPRODUCE Inventory.ini ``` [windows] host [windows:vars] ansible_user=.\User ansible_password=Password ansible_connection=winrm ansible_port=5986 ansible_winrm_transport=ntlm ansible_winrm_server_cert_validation=ignore ``` Playbook.yml ``` - name: test windows hosts: windows tasks: - name: create test folder win_file: path: C:\temp\test state: directory - name: create symlink raw: CMD.EXE /C mklink /d C:\temp\link C:\temp\test - name: delete folder that link is pointing to win_file: path: C:\temp\test state: absent - name: delete parent folder containing broken link win_file: path: C:\temp state: absent ``` ##### EXPECTED RESULTS Expected C:\temp to be deleted correctly without exception ##### ACTUAL RESULTS ``` An exception occurred during task execution. 
The full traceback is: At C:\Users\User\AppData\Local\Temp\ansible-tmp-1475364906.0-88571632788975\win_file.ps1:317 char:9 + Remove-Item -Recurse -Force $fileinfo + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ fatal: [192.168.1.13]: FAILED! => { ""changed"": false, ""error_record"": { ""CategoryInfo"": { ""Activity"": ""Remove-Item"", ""Category"": 23, ""Reason"": ""DirectoryNotFoundException"", ""TargetName"": ""C:\\temp"", ""TargetType"": ""String"" }, ""ErrorDetails"": null, ""Exception"": { ""Data"": {}, ""HResult"": -2147024893, ""HelpLink"": null, ""InnerException"": null, ""Message"": ""Could not find a part of the path 'C:\\temp\\link'."", ""Source"": ""mscorlib"", ""StackTrace"": "" at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)\r\n at System.IO.FileSystemEnumerableIterator`1.CommonInit()\r\n at System.IO.FileSystemEnumerableIterator`1..ctor(String path, String originalUserPath, String searchPattern, SearchOption searchOption, SearchResultHandler`1 resultHandler, Boolean checkHost)\r\n at System.IO.FileSystemEnumerableFactory.CreateDirectoryInfoIterator(String path, String originalUserPath, String searchPattern, SearchOption searchOption)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveDirectoryInfoItem(DirectoryInfo directory, Boolean recurse, Boolean force, Boolean rootOfRemoval)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveDirectoryInfoItem(DirectoryInfo directory, Boolean recurse, Boolean force, Boolean rootOfRemoval)\r\n at Microsoft.PowerShell.Commands.FileSystemProvider.RemoveItem(String path, Boolean recurse)"", ""TargetSite"": { ""Attributes"": 147, ""CallingConvention"": 1, ""ContainsGenericParameters"": false, ""CustomAttributes"": ""[System.Security.SecurityCriticalAttribute()]"", ""DeclaringType"": ""System.IO.__Error, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"", ""IsAbstract"": false, ""IsAssembly"": true, ""IsConstructor"": false, ""IsFamily"": false, ""IsFamilyAndAssembly"": false, ""IsFamilyOrAssembly"": false, ""IsFinal"": false, ""IsGenericMethod"": false, ""IsGenericMethodDefinition"": false, ""IsHideBySig"": true, ""IsPrivate"": false, ""IsPublic"": false, ""IsSecurityCritical"": true, ""IsSecuritySafeCritical"": false, ""IsSecurityTransparent"": false, ""IsSpecialName"": false, ""IsStatic"": true, ""IsVirtual"": false, ""MemberType"": 8, ""MetadataToken"": 100680987, ""MethodHandle"": ""System.RuntimeMethodHandle"", ""MethodImplementationFlags"": 0, ""Module"": ""CommonLanguageRuntimeLibrary"", ""Name"": ""WinIOError"", ""ReflectedType"": ""System.IO.__Error, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"", ""ReturnParameter"": ""Void "", ""ReturnType"": ""void"", ""ReturnTypeCustomAttributes"": ""Void "" } }, ""FullyQualifiedErrorId"": ""RemoveItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand"", ""InvocationInfo"": { ""BoundParameters"": {}, ""CommandOrigin"": 1, ""DisplayScriptPosition"": null, ""ExpectingInput"": false, ""HistoryId"": 1, ""InvocationName"": ""Remove-Item"", ""Line"": "" Remove-Item -Recurse -Force $fileinfo\n"", ""MyCommand"": { ""CommandType"": 8, ""DefaultParameterSet"": ""Path"", ""Definition"": ""\r\nRemove-Item [-Path] [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []\r\n\r\nRemove-Item -LiteralPath [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []\r\n"", ""HelpFile"": 
""Microsoft.PowerShell.Commands.Management.dll-Help.xml"", ""ImplementingType"": ""Microsoft.PowerShell.Commands.RemoveItemCommand"", ""Module"": ""Microsoft.PowerShell.Management"", ""ModuleName"": ""Microsoft.PowerShell.Management"", ""Name"": ""Remove-Item"", ""Noun"": ""Item"", ""Options"": 1, ""OutputType"": """", ""PSSnapIn"": null, ""ParameterSets"": ""[-Path] [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] [] -LiteralPath [-Filter ] [-Include ] [-Exclude ] [-Recurse] [-Force] [-Credential ] [-WhatIf] [-Confirm] [-UseTransaction] []"", ""Parameters"": ""System.Collections.Generic.Dictionary`2[System.String,System.Management.Automation.ParameterMetadata]"", ""RemotingCapability"": 1, ""Verb"": ""Remove"", ""Visibility"": 0 }, ""OffsetInLine"": 9, ""PSCommandPath"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1"", ""PSScriptRoot"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975"", ""PipelineLength"": 0, ""PipelinePosition"": 0, ""PositionMessage"": ""At C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1:317 char:9\r\n+ Remove-Item -Recurse -Force $fileinfo\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"", ""ScriptLineNumber"": 317, ""ScriptName"": ""C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1"", ""UnboundArguments"": [] }, ""PSMessageDetails"": null, ""PipelineIterationInfo"": [], ""ScriptStackTrace"": ""at , C:\\Users\\User\\AppData\\Local\\Temp\\ansible-tmp-1475364906.0-88571632788975\\win_file.ps1: line 317\r\nat , : line 4"", ""TargetObject"": ""C:\\temp"" }, ""failed"": true, ""invocation"": { ""module_name"": ""win_file"" }, ""msg"": ""Could not find a part of the path 'C:\\temp\\link'."" } ``` ",1,win file failed to delete folder with broken symlink issue type bug report component name ansible modules core windows win file ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path os environment running bash for ubuntu on windows managing windows server summary when setting win file state absent for a directory that contains a symlink that is no longer pointing to a valid directory it fails with directorynotfoundexception steps to reproduce inventory ini host ansible user user ansible password password ansible connection winrm ansible port ansible winrm transport ntlm ansible winrm server cert validation ignore playbook yml name test windows hosts windows tasks name create test folder win file path c temp test state directory name create symlink raw cmd exe c mklink d c temp link c temp test name delete folder that link is pointing to win file path c temp test state absent name delete parent folder containing broken link win file path c temp state absent expected results expected c temp to be deleted correctly without exception actual results an exception occurred during task execution the full traceback is at c users user appdata local temp ansible tmp win file char remove item recurse force fileinfo fatal failed changed false error record categoryinfo activity remove item category reason directorynotfoundexception targetname c temp targettype string errordetails null exception data hresult helplink null innerexception null message could not find a part of the path c temp link source 
mscorlib stacktrace at system io error winioerror errorcode string maybefullpath r n at system io filesystemenumerableiterator commoninit r n at system io filesystemenumerableiterator ctor string path string originaluserpath string searchpattern searchoption searchoption searchresulthandler resulthandler boolean checkhost r n at system io filesystemenumerablefactory createdirectoryinfoiterator string path string originaluserpath string searchpattern searchoption searchoption r n at microsoft powershell commands filesystemprovider removedirectoryinfoitem directoryinfo directory boolean recurse boolean force boolean rootofremoval r n at microsoft powershell commands filesystemprovider removedirectoryinfoitem directoryinfo directory boolean recurse boolean force boolean rootofremoval r n at microsoft powershell commands filesystemprovider removeitem string path boolean recurse targetsite attributes callingconvention containsgenericparameters false customattributes declaringtype system io error mscorlib version culture neutral publickeytoken isabstract false isassembly true isconstructor false isfamily false isfamilyandassembly false isfamilyorassembly false isfinal false isgenericmethod false isgenericmethoddefinition false ishidebysig true isprivate false ispublic false issecuritycritical true issecuritysafecritical false issecuritytransparent false isspecialname false isstatic true isvirtual false membertype metadatatoken methodhandle system runtimemethodhandle methodimplementationflags module commonlanguageruntimelibrary name winioerror reflectedtype system io error mscorlib version culture neutral publickeytoken returnparameter void returntype void returntypecustomattributes void fullyqualifiederrorid removeitemioerror microsoft powershell commands removeitemcommand invocationinfo boundparameters commandorigin displayscriptposition null expectinginput false historyid invocationname remove item line remove item recurse force fileinfo n mycommand commandtype defaultparameterset path definition r nremove item r n r nremove item literalpath r n helpfile microsoft powershell commands management dll help xml implementingtype microsoft powershell commands removeitemcommand module microsoft powershell management modulename microsoft powershell management name remove item noun item options outputtype pssnapin null parametersets literalpath parameters system collections generic dictionary remotingcapability verb remove visibility offsetinline pscommandpath c users user appdata local temp ansible tmp win file psscriptroot c users user appdata local temp ansible tmp pipelinelength pipelineposition positionmessage at c users user appdata local temp ansible tmp win file char r n remove item recurse force fileinfo r n scriptlinenumber scriptname c users user appdata local temp ansible tmp win file unboundarguments psmessagedetails null pipelineiterationinfo scriptstacktrace at c users user appdata local temp ansible tmp win file line r nat line targetobject c temp failed true invocation module name win file msg could not find a part of the path c temp link ,1 1872,6577498977.0,IssuesEvent,2017-09-12 01:20:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /etc/ansible/ansible.cfg configured module search 
path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17. ##### SUMMARY When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run. ##### STEPS TO REPRODUCE Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml ``` --- - hosts: localhost tasks: - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" ``` ##### EXPECTED RESULTS I expect that only one VPC will be created regardless of how many times the playbook is run ##### ACTUAL RESULTS Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made. ``` [dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 [dwood@dawood-arch ansible]$ ``` ",True,"ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17. ##### SUMMARY When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run. 
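The summary above turns on a normalization mismatch: AWS masks the host bits of the requested block (10.20.30.0/16 is stored as 10.20.0.0/16) while the module compares the raw user-supplied string. A minimal sketch of that canonicalization step follows, using Python's standard ipaddress module purely for illustration — it is not the ec2_vpc module's actual code, and the function names are invented for this example.
```python
# Sketch: compare CIDRs the way AWS reports them, not as raw strings.
# Standard library only (Python 3); illustrative, not ec2_vpc's real implementation.
import ipaddress

def canonical_cidr(cidr):
    """Return the CIDR with host bits masked off, e.g. 10.20.30.0/16 -> 10.20.0.0/16."""
    return str(ipaddress.ip_network(cidr, strict=False))

def vpc_matches(requested_cidr, existing_cidr):
    """True if the requested block is the same network AWS already reports."""
    return canonical_cidr(requested_cidr) == canonical_cidr(existing_cidr)

print(canonical_cidr("10.20.30.0/16"))               # 10.20.0.0/16
print(vpc_matches("10.20.30.0/16", "10.20.0.0/16"))  # True -> no new VPC needed
```
Comparing canonical networks (or the ip_network objects themselves) instead of literal strings makes the idempotence check agree with what describe-vpcs returns, which is the behaviour the reporter expects.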
##### STEPS TO REPRODUCE Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml ``` --- - hosts: localhost tasks: - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" ``` ##### EXPECTED RESULTS I expect that only one VPC will be created regardless of how many times the playbook is run ##### ACTUAL RESULTS Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made. ``` [dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 [dwood@dawood-arch ansible]$ ``` ",1, vpc module erroneously recreates vpcs when passing loosely defined cidr blocks issue type bug report component name vpc module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment host os is arch linux i m building infrastructure in aws using boto version and aws cli version summary when creating vpcs aws will automatically convert your subnet cidr blocks to it s strictest representation will be converted to however when performing checks beginning line of vpc py to determine if the vpc needs to be modified ansible uses the representation provided by the user which can differ from the representation returned by aws in this case a new vpc will be erroneously created for each subsequent playbook run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used save the following playbook as vpc test yml and run it with ansible playbook vpc test yml hosts localhost tasks name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west expected results i expect that only one vpc will be created regardless of how many times the playbook is run actual results two new identical vpcs are created every time this playbook is run despite no playbook changes being made ansible playbook vpc test yml host file not found etc ansible hosts provided hosts list is empty only localhost is available play task changed task changed play recap localhost ok changed unreachable failed ,1 1779,6575820717.0,IssuesEvent,2017-09-11 17:27:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mysql_user to change password fails with Ubuntu 16.04 on MariaDB,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` 
##### CONFIGURATION Using Ansible Tower 3.0.2 ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY mysql_user to change password fails with Ubuntu 16.04 on MariaDB. Exactly the same scripts work with Ubuntu 14.04. It seems to try to run the wrong command. ##### STEPS TO REPRODUCE ``` 1. Create a host running Ubuntu 16.04 2. Install MariaDB Galera Server 10.0 3. Run the following command: mysql_user: name=debian-sys-maint host=localhost password={{ debian_dbpassword }} state=present login_user=root login_password={{ root_dbpassword }} ``` ##### EXPECTED RESULTS The password for ""debian-sys-maint"" is changed. ##### ACTUAL RESULTS ``` TASK [mariadb-cluster : Set common debian password] **************************** task path: /var/lib/awx/projects/_1137__ansible_obelisk/roles/mariadb-cluster/tasks/main.yml:13 <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `"" && echo ansible-tmp-1475559634.15-95308147439585=""` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `"" ) && sleep 0'""'""'' <172.24.32.39> PUT /tmp/tmpjR2PoI TO /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user <172.24.32.39> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r '[172.24.32.39]' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '""'""'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/ /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user && sleep 0'""'""'' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r -tt 172.24.32.39 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-mxijegtsjwtriiriwsiwjlltyyadswhu; LANG=en_AU.UTF-8 LC_ALL=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [Larry Database1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": false, ""config_file"": ""/root/.my.cnf"", ""connect_timeout"": 30, ""encrypted"": false, ""host"": ""localhost"", ""host_all"": false, ""login_host"": ""localhost"", ""login_password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": ""root"", ""name"": ""debian-sys-maint"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""sql_log_bin"": true, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""debian-sys-maint""}, ""module_name"": ""mysql_user""}, ""msg"": ""(1064, \""You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''' at line 1\"")""} ``` ",True,"mysql_user to change password fails with Ubuntu 16.04 on MariaDB - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Using Ansible Tower 3.0.2 ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY mysql_user to change password fails with Ubuntu 16.04 on MariaDB. Exactly the same scripts work with Ubuntu 14.04. It seems to try to run the wrong command. ##### STEPS TO REPRODUCE ``` 1. Create a host running Ubuntu 16.04 2. Install MariaDB Galera Server 10.0 3. Run the following command: mysql_user: name=debian-sys-maint host=localhost password={{ debian_dbpassword }} state=present login_user=root login_password={{ root_dbpassword }} ``` ##### EXPECTED RESULTS The password for ""debian-sys-maint"" is changed. 
##### ACTUAL RESULTS ``` TASK [mariadb-cluster : Set common debian password] **************************** task path: /var/lib/awx/projects/_1137__ansible_obelisk/roles/mariadb-cluster/tasks/main.yml:13 <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `"" && echo ansible-tmp-1475559634.15-95308147439585=""` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `"" ) && sleep 0'""'""'' <172.24.32.39> PUT /tmp/tmpjR2PoI TO /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user <172.24.32.39> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r '[172.24.32.39]' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '""'""'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/ /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user && sleep 0'""'""'' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r -tt 172.24.32.39 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-mxijegtsjwtriiriwsiwjlltyyadswhu; LANG=en_AU.UTF-8 LC_ALL=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' fatal: [Larry Database1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append_privs"": false, ""check_implicit_admin"": false, ""config_file"": ""/root/.my.cnf"", ""connect_timeout"": 30, ""encrypted"": false, ""host"": ""localhost"", ""host_all"": false, ""login_host"": ""localhost"", ""login_password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""login_port"": 3306, ""login_unix_socket"": null, ""login_user"": ""root"", ""name"": ""debian-sys-maint"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""priv"": null, ""sql_log_bin"": true, ""ssl_ca"": null, ""ssl_cert"": null, ""ssl_key"": null, ""state"": ""present"", ""update_password"": ""always"", ""user"": ""debian-sys-maint""}, ""module_name"": ""mysql_user""}, ""msg"": ""(1064, \""You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''' at line 1\"")""} ``` ",1,mysql user to change password fails with ubuntu on mariadb issue type bug report component name mysql user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration using ansible tower os environment ubuntu summary mysql user to change password fails with ubuntu on mariadb exactly the same scripts work with ubuntu it seems to try to run the wrong command steps to reproduce create a host running ubuntu install mariadb galera server run the following command mysql user name debian sys maint host localhost password debian dbpassword state present login user root login password root dbpassword expected results the password for debian sys maint is changed actual results task task path var lib awx projects ansible obelisk roles mariadb cluster tasks main yml establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ubuntu ansible tmp ansible tmp mysql user ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r bin sh c chmod u x home ubuntu ansible tmp ansible tmp home ubuntu ansible tmp ansible tmp mysql user sleep establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success mxijegtsjwtriiriwsiwjlltyyadswhu lang en au utf lc all en au utf lc messages en au utf usr bin python home ubuntu 
ansible tmp ansible tmp mysql user rm rf home ubuntu ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args append privs false check implicit admin false config file root my cnf connect timeout encrypted false host localhost host all false login host localhost login password value specified in no log parameter login port login unix socket null login user root name debian sys maint password value specified in no log parameter priv null sql log bin true ssl ca null ssl cert null ssl key null state present update password always user debian sys maint module name mysql user msg you have an error in your sql syntax check the manual that corresponds to your mariadb server version for the right syntax to use near at line ,1 1680,6574141460.0,IssuesEvent,2017-09-11 11:40:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template module creates an acutal new line when reading (m?)\n ,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME template ##### ANSIBLE VERSION 2.0 and higher ##### CONFIGURATION [ssh_connection] control_path = %(directory)s/%%C ##### OS / ENVIRONMENT Mac OS X 10.11.6 Centos 6.x, 7.x ##### SUMMARY In the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing ""(?m)\n"" . The output generated by the template module in versions 2.0 and later, treats the \n as actual line break. Where as versions up to 1.9.6 retains the literal ""(?m)\n"" without replacing the \n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x. Any way we can work around this issue? Thank you for your help. ##### STEPS TO REPRODUCE Our execution flow is probably not the nicest - we want to reengineer it soon. Basic steps: 1. Run a shell script with ansible-playbook command that pass in an env variable with (?m)\n literal. 2. Playbook calls a main yaml file and assigns shell environment var to a included task yaml file. 3. The task yaml file invokes the template module. In the snippet below I stripped out other lines/vars for clarity. main shell ``` set GROK_PATTERN_GENERAL_ERROR_PG=""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"" ansible-playbook -i ../common/host.inventory \ -${VERBOSE} \ t.yml \ ${CHECK_ONLY} \ --extra-vars ""hosts='${HOST}' xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}' "" ``` t.yml ``` --- - hosts: 127.0.0.1 connection: local tasks: - include_vars: ../common/defaults/main.yml - name: generate logstash kafka logscan filter config file include: tasks/t.yml vars: logstash_grok_general_error: ""{{xlogstash_grok_general_error}}"" ``` tasks/t.yml ``` --- - name: generate logstash kafka logscan filter config file template: src=../common/templates/my.conf.j2 dest=""./500-filter.conf"" ``` my.conf.j2 ``` grok { break_on_match => ""true"" match => [ ""message"", ""{{logstash_grok_general_error}}"" ] } ``` Note the (?m)\n are still on the same line. ##### EXPECTED RESULTS ``` grok { break_on_match => ""true"" match => [ ""message"", ""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"" ] } ``` ##### ACTUAL RESULTS Note (?m)\n now has the \n as actual line break. 
``` grok { break_on_match => ""true"" match => [ ""message"", ""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m) %{USER:logerror}%{GREEDYDATA})"" ] } ``` ",True,"template module creates an acutal new line when reading (m?)\n - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME template ##### ANSIBLE VERSION 2.0 and higher ##### CONFIGURATION [ssh_connection] control_path = %(directory)s/%%C ##### OS / ENVIRONMENT Mac OS X 10.11.6 Centos 6.x, 7.x ##### SUMMARY In the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing ""(?m)\n"" . The output generated by the template module in versions 2.0 and later, treats the \n as actual line break. Where as versions up to 1.9.6 retains the literal ""(?m)\n"" without replacing the \n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x. Any way we can work around this issue? Thank you for your help. ##### STEPS TO REPRODUCE Our execution flow is probably not the nicest - we want to reengineer it soon. Basic steps: 1. Run a shell script with ansible-playbook command that pass in an env variable with (?m)\n literal. 2. Playbook calls a main yaml file and assigns shell environment var to a included task yaml file. 3. The task yaml file invokes the template module. In the snippet below I stripped out other lines/vars for clarity. main shell ``` set GROK_PATTERN_GENERAL_ERROR_PG=""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"" ansible-playbook -i ../common/host.inventory \ -${VERBOSE} \ t.yml \ ${CHECK_ONLY} \ --extra-vars ""hosts='${HOST}' xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}' "" ``` t.yml ``` --- - hosts: 127.0.0.1 connection: local tasks: - include_vars: ../common/defaults/main.yml - name: generate logstash kafka logscan filter config file include: tasks/t.yml vars: logstash_grok_general_error: ""{{xlogstash_grok_general_error}}"" ``` tasks/t.yml ``` --- - name: generate logstash kafka logscan filter config file template: src=../common/templates/my.conf.j2 dest=""./500-filter.conf"" ``` my.conf.j2 ``` grok { break_on_match => ""true"" match => [ ""message"", ""{{logstash_grok_general_error}}"" ] } ``` Note the (?m)\n are still on the same line. ##### EXPECTED RESULTS ``` grok { break_on_match => ""true"" match => [ ""message"", ""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"" ] } ``` ##### ACTUAL RESULTS Note (?m)\n now has the \n as actual line break. 
``` grok { break_on_match => ""true"" match => [ ""message"", ""%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m) %{USER:logerror}%{GREEDYDATA})"" ] } ``` ",1,template module creates an acutal new line when reading m n issue type bug report component name template ansible version and higher configuration control path directory s c os environment mac os x centos x x summary in the input file we substitute a variable with an environment variable that has a line string that contains a grok expression containing m n the output generated by the template module in versions and later treats the n as actual line break where as versions up to retains the literal m n without replacing the n with an actual line break we see the line break after we upgraded the ansible version to x any way we can work around this issue thank you for your help steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used our execution flow is probably not the nicest we want to reengineer it soon basic steps run a shell script with ansible playbook command that pass in an env variable with m n literal playbook calls a main yaml file and assigns shell environment var to a included task yaml file the task yaml file invokes the template module in the snippet below i stripped out other lines vars for clarity main shell set grok pattern general error pg timestamp error user classname greedydata m n user logerror greedydata ansible playbook i common host inventory verbose t yml check only extra vars hosts host xlogstash grok general error grok pattern general error pg t yml hosts connection local tasks include vars common defaults main yml name generate logstash kafka logscan filter config file include tasks t yml vars logstash grok general error xlogstash grok general error tasks t yml name generate logstash kafka logscan filter config file template src common templates my conf dest filter conf my conf grok break on match true match message logstash grok general error note the m n are still on the same line expected results grok break on match true match message timestamp error user classname greedydata m n user logerror greedydata actual results note m n now has the n as actual line break grok break on match true match message timestamp error user classname greedydata m user logerror greedydata ,1 1757,6574984203.0,IssuesEvent,2017-09-11 14:41:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Default deployment mode is inconistent with Azure ARM tools,affects_2.3 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME https://docs.ansible.com/ansible/azure_rm_deployment_module.html ##### SUMMARY The default mode for deployment should be incremental. This is the default mode of the native azure tools. https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-deploy/#incremental-and-complete-deployments It seems dangerous to choose a destructive mode as the default. NOTE - Please Do not close as being in the wrong repo. The module docs say it is an extra but my ticket there was closed - https://github.com/ansible/ansible-modules-extras/issues/3189 ",True,"Default deployment mode is inconistent with Azure ARM tools - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME https://docs.ansible.com/ansible/azure_rm_deployment_module.html ##### SUMMARY The default mode for deployment should be incremental. 
This is the default mode of the native azure tools. https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-deploy/#incremental-and-complete-deployments It seems dangerous to choose a destructive mode as the default. NOTE - Please Do not close as being in the wrong repo. The module docs say it is an extra but my ticket there was closed - https://github.com/ansible/ansible-modules-extras/issues/3189 ",1,default deployment mode is inconistent with azure arm tools issue type bug report component name summary the default mode for deployment should be incremental this is the default mode of the native azure tools it seems dangerous to choose a destructive mode as the default note please do not close as being in the wrong repo the module docs say it is an extra but my ticket there was closed ,1 1851,6577396140.0,IssuesEvent,2017-09-12 00:37:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Show actual status code for uri,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION none ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When using uri and a status code other than 200 is returned, the actual status code returned is not reported. The message is ""Status code was not [200]"". Sometimes it is very difficult to determine the actual status code returned by other means. It would make debugging easier if the actual error code was included in the error message. ",True,"Show actual status code for uri - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION none ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When using uri and a status code other than 200 is returned, the actual status code returned is not reported. The message is ""Status code was not [200]"". Sometimes it is very difficult to determine the actual status code returned by other means. It would make debugging easier if the actual error code was included in the error message. 
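The uri feature request above is only about error reporting: when the status check fails, say what code actually came back. A small sketch of that style of message follows, written against the third-party requests library purely for illustration (it is not how the uri module performs its requests); the URL and allowed codes are placeholders.
```python
# Sketch: include the observed status code in the failure message instead of
# only listing which codes were allowed. Placeholder URL and codes.
import requests

def check_status(url, allowed=(200,)):
    resp = requests.get(url, timeout=10)
    if resp.status_code not in allowed:
        # The point of the request: report what actually came back.
        raise SystemExit("Status code was not %s: got %d from %s"
                         % (list(allowed), resp.status_code, url))
    return resp

check_status("https://example.com/health", allowed=(200, 204))
```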
",1,show actual status code for uri issue type feature idea component name uri ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ubuntu summary when using uri and a status code other than is returned the actual status code returned is not reported the message is status code was not sometimes it is very difficult to determine the actual status code returned by other means it would make debugging easier if the actual error code was included in the error message ,1 1784,6575850595.0,IssuesEvent,2017-09-11 17:34:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Validation with visudo does not work for lineinfile if ,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` CentOS Linux release 7.2.1511 (Core) Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When trying to create or modify a file in _/etc/sudoers.d_ by using the `lineinfile` module the validation with visudo fails because a temporary file is not found. http://docs.ansible.com/ansible/lineinfile_module.html ##### STEPS TO REPRODUCE ``` - name: Setup sudoers permissions lineinfile: dest=/etc/sudoers.d/icinga2 create=yes state=present line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find' validate='visudo -cf %s' ``` ##### EXPECTED RESULTS A file created under _/etc/sudoers.d/icinga2_ with the content `icinga ALL=(ALL) NOPASSWD:/usr/bin/find`which passed validation. ##### ACTUAL RESULTS ``` FAILED! => {""changed"": false, ""cmd"": ""visudo -cf /tmp/tmpSBsM5A"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2 ``` ",True,"Validation with visudo does not work for lineinfile if - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` CentOS Linux release 7.2.1511 (Core) Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When trying to create or modify a file in _/etc/sudoers.d_ by using the `lineinfile` module the validation with visudo fails because a temporary file is not found. http://docs.ansible.com/ansible/lineinfile_module.html ##### STEPS TO REPRODUCE ``` - name: Setup sudoers permissions lineinfile: dest=/etc/sudoers.d/icinga2 create=yes state=present line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find' validate='visudo -cf %s' ``` ##### EXPECTED RESULTS A file created under _/etc/sudoers.d/icinga2_ with the content `icinga ALL=(ALL) NOPASSWD:/usr/bin/find`which passed validation. ##### ACTUAL RESULTS ``` FAILED! 
=> {""changed"": false, ""cmd"": ""visudo -cf /tmp/tmpSBsM5A"", ""failed"": true, ""msg"": ""[Errno 2] No such file or directory"", ""rc"": 2 ``` ",1,validation with visudo does not work for lineinfile if issue type bug report component name lineinfile ansible version ansible configuration n a os environment centos linux release core linux smp thu may utc gnu linux summary when trying to create or modify a file in etc sudoers d by using the lineinfile module the validation with visudo fails because a temporary file is not found steps to reproduce name setup sudoers permissions lineinfile dest etc sudoers d create yes state present line icinga all all nopasswd usr bin find validate visudo cf s expected results a file created under etc sudoers d with the content icinga all all nopasswd usr bin find which passed validation actual results failed changed false cmd visudo cf tmp failed true msg no such file or directory rc ,1 1674,6574094254.0,IssuesEvent,2017-09-11 11:27:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image_facts: unable to deal with image IDs,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_image_facts` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to inspect an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ docker inspect $ diff -q <(docker inspect alpine) <(docker inspect sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3) $ echo $? 0 $ ansible -m docker_image_facts -a name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_image_facts -a name=alpine localhost`. 
``` localhost | SUCCESS => { ""changed"": false, ""images"": [ { ""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": { ""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": null, ""Domainname"": """", ""Entrypoint"": null, ""Env"": [ ""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"" ], ""Hostname"": ""1d811a9194c4"", ""Image"": """", ""Labels"": null, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """" }, ""Container"": ""1d811a9194c47475510bc53700001c32f2b0eb8e3aca0914c5424109c0cd2056"", ""ContainerConfig"": { ""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [ ""/bin/sh"", ""-c"", ""#(nop) ADD file:7afbc23fda8b0b3872623c16af8e3490b2cee951aed14b3794389c2f946cc8c7 in / "" ], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [ ""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"" ], ""Hostname"": ""1d811a9194c4"", ""Image"": """", ""Labels"": null, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """" }, ""Created"": ""2016-10-18T20:31:22.321427771Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": { ""Data"": { ""RootDir"": ""/var/lib/docker/overlay/7be156a62962247279d73aaafb6ccb6b9c3d25d188641fef7f447ea88563aa4f/root"" }, ""Name"": ""overlay"" }, ""Id"": ""sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [ ""alpine@sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c"" ], ""RepoTags"": [ ""alpine:latest"" ], ""RootFS"": { ""Layers"": [ ""sha256:011b303988d241a4ae28a6b82b0d8262751ef02910f0ae2265cb637504b72e36"" ], ""Type"": ""layers"" }, ""Size"": 4799225, ""VirtualSize"": 4799225 } ] } ``` ##### ACTUAL RESULTS Instead no matching image is returned. ``` localhost | SUCCESS => { ""changed"": false, ""images"": [] } ``` ",True,"docker_image_facts: unable to deal with image IDs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_image_facts` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to inspect an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ docker inspect $ diff -q <(docker inspect alpine) <(docker inspect sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3) $ echo $? 0 $ ansible -m docker_image_facts -a name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_image_facts -a name=alpine localhost`. 
``` localhost | SUCCESS => { ""changed"": false, ""images"": [ { ""Architecture"": ""amd64"", ""Author"": """", ""Comment"": """", ""Config"": { ""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": null, ""Domainname"": """", ""Entrypoint"": null, ""Env"": [ ""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"" ], ""Hostname"": ""1d811a9194c4"", ""Image"": """", ""Labels"": null, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """" }, ""Container"": ""1d811a9194c47475510bc53700001c32f2b0eb8e3aca0914c5424109c0cd2056"", ""ContainerConfig"": { ""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [ ""/bin/sh"", ""-c"", ""#(nop) ADD file:7afbc23fda8b0b3872623c16af8e3490b2cee951aed14b3794389c2f946cc8c7 in / "" ], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [ ""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"" ], ""Hostname"": ""1d811a9194c4"", ""Image"": """", ""Labels"": null, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": """" }, ""Created"": ""2016-10-18T20:31:22.321427771Z"", ""DockerVersion"": ""1.12.1"", ""GraphDriver"": { ""Data"": { ""RootDir"": ""/var/lib/docker/overlay/7be156a62962247279d73aaafb6ccb6b9c3d25d188641fef7f447ea88563aa4f/root"" }, ""Name"": ""overlay"" }, ""Id"": ""sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3"", ""Os"": ""linux"", ""Parent"": """", ""RepoDigests"": [ ""alpine@sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c"" ], ""RepoTags"": [ ""alpine:latest"" ], ""RootFS"": { ""Layers"": [ ""sha256:011b303988d241a4ae28a6b82b0d8262751ef02910f0ae2265cb637504b72e36"" ], ""Type"": ""layers"" }, ""Size"": 4799225, ""VirtualSize"": 4799225 } ] } ``` ##### ACTUAL RESULTS Instead no matching image is returned. 
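One way to confirm that the daemon itself has no trouble resolving sha256 IDs is to inspect the image from Python directly. A sketch follows, assuming the docker Python SDK is installed; on older docker-py installs the low-level client class is docker.Client rather than docker.APIClient, and the socket path may need adjusting.
```python
# Sketch: inspect an image by its sha256 ID straight against the Docker API.
# Assumes the docker Python SDK (pip install docker); not the module's own code.
import docker

IMAGE_ID = "sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3"

client = docker.APIClient(base_url="unix://var/run/docker.sock")
details = client.inspect_image(IMAGE_ID)   # accepts a name:tag or an ID
print(details["Id"], details.get("RepoTags"))
```
Since docker inspect accepts an ID as readily as a tag, the gap described in the report is in the module's lookup (matching only on repository tags) rather than in the daemon.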
``` localhost | SUCCESS => { ""changed"": false, ""images"": [] } ``` ",1,docker image facts unable to deal with image ids issue type bug report component name docker image facts ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing images by id ansible should do the same otherwise it s impossible to inspect an unnamed image steps to reproduce sh docker pull alpine docker inspect format id alpine docker inspect diff q docker inspect alpine docker inspect echo ansible m docker image facts a name localhost expected results the output should be the same as from ansible m docker image facts a name alpine localhost localhost success changed false images architecture author comment config attachstderr false attachstdin false attachstdout false cmd null domainname entrypoint null env path usr local sbin usr local bin usr sbin usr bin sbin bin hostname image labels null onbuild null openstdin false stdinonce false tty false user volumes null workingdir container containerconfig attachstderr false attachstdin false attachstdout false cmd bin sh c nop add file in domainname entrypoint null env path usr local sbin usr local bin usr sbin usr bin sbin bin hostname image labels null onbuild null openstdin false stdinonce false tty false user volumes null workingdir created dockerversion graphdriver data rootdir var lib docker overlay root name overlay id os linux parent repodigests alpine repotags alpine latest rootfs layers type layers size virtualsize actual results instead no matching image is returned localhost success changed false images ,1 1682,6574154006.0,IssuesEvent,2017-09-11 11:43:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter,affects_2.3 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group parameter NewInstancesProtectedFromScaleIn is currently unsupported ##### STEPS TO REPRODUCE ``` - ec2_asg: name: myasg launch_config_name: my_new_lc health_check_period: 60 health_check_type: ELB min_size: 5 max_size: 5 desired_capacity: 5 region: us-east-1 new_instances_protected_from_scale_in: true | false ``` ##### EXPECTED RESULTS param to be taken into account",True,"EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group parameter NewInstancesProtectedFromScaleIn is currently unsupported ##### STEPS TO REPRODUCE ``` - ec2_asg: name: myasg launch_config_name: my_new_lc health_check_period: 60 health_check_type: ELB min_size: 5 max_size: 5 desired_capacity: 5 region: us-east-1 new_instances_protected_from_scale_in: true | false ``` ##### EXPECTED RESULTS param to be taken into account",1, asg support newinstancesprotectedfromscalein parameter issue type feature idea component name asg ansible version ansible config file 
configured module search path default w o overrides summary see parameter newinstancesprotectedfromscalein is currently unsupported steps to reproduce asg name myasg launch config name my new lc health check period health check type elb min size max size desired capacity region us east new instances protected from scale in true false expected results param to be taken into account,1 1150,5008198798.0,IssuesEvent,2016-12-12 18:50:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,iam_policy not using role structure for policy_document path,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Component Name: iam_policy module ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: no changes to ansible.cfg ##### Environment: control server Redhat 6.7 target server Redhat 6.7 ##### Summary: iam_policy policy_document parameter does not use role file structure ##### Steps To Reproduce: aws.yml ``` --- - hosts: localhost connection: local gather_facts: true roles: - aws ``` roles/aws/task/main.yml ``` --- - name: configure iam policy iam_policy: iam_name: test iam_type: group policy_name: test policy_document: test.json # policy_document: roles/aws/files/test.json region: us-east-1 state: present ``` roles/aws/files/test.json ``` --- ``` run play ansible-playbook aws.yml, task fails, no such file or directory test.json however, if you change the policy_document argument to: roles/aws/files/test.json and it works ##### Expected Results: Policy is created ##### Actual Results: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: 'test.json' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""parsed"": false} ",True,"iam_policy not using role structure for policy_document path - ##### Issue Type: - Bug Report ##### Component Name: iam_policy module ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: no changes to ansible.cfg ##### Environment: control server Redhat 6.7 target server Redhat 6.7 ##### Summary: iam_policy policy_document parameter does not use role file structure ##### Steps To Reproduce: aws.yml ``` --- - hosts: localhost connection: local gather_facts: true roles: - aws ``` roles/aws/task/main.yml ``` --- - name: configure iam policy iam_policy: iam_name: test iam_type: group policy_name: test policy_document: test.json # policy_document: roles/aws/files/test.json region: us-east-1 state: present ``` roles/aws/files/test.json ``` --- ``` run play ansible-playbook aws.yml, task fails, no such file or directory test.json however, if you change the policy_document argument to: roles/aws/files/test.json and it works ##### Expected Results: Policy is created ##### Actual Results: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: 'test.json' fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""parsed"": false} ",1,iam policy not using role structure for policy document path issue type bug report component name iam policy module ansible version ansible ansible configuration no changes to ansible cfg environment control server redhat target server redhat summary iam policy policy document parameter does not use role file structure steps to reproduce aws yml hosts localhost connection local gather facts true roles aws roles aws task main yml name configure iam policy iam policy iam name test iam type group policy name test policy document test json policy document roles aws files test json region us east state present roles aws files test json run play ansible playbook aws yml task fails no such file or directory test json however if you change the policy document argument to roles aws files test json and it works expected results policy is created actual results an exception occurred during task execution to see the full traceback use vvv the error was ioerror no such file or directory test json fatal failed changed false failed true parsed false ,1 784,4387511460.0,IssuesEvent,2016-08-08 15:59:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Synchronize: a problem with a non-standard ssh port,bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME synchronize ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] vault_password_file=.vault_pass.txt ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY I have the following option in my /etc/ssh/ssh_config : ``` Port 2222 ``` I want to sync some directory to a machine that has this address: 10.0.3.188:22. I get an error ``` ssh: connect to host 10.0.3.188 port 2222: Connection refused ``` This [code](https://github.com/ansible/ansible-modules-core/blob/c52f475c64372042daab4ebc0660a2782b71d10d/files/synchronize.py#L414-L417) is responsible for this behavior. Why don't set port explicitly whatever its value is? ",True,"Synchronize: a problem with a non-standard ssh port - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME synchronize ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] vault_password_file=.vault_pass.txt ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY I have the following option in my /etc/ssh/ssh_config : ``` Port 2222 ``` I want to sync some directory to a machine that has this address: 10.0.3.188:22. I get an error ``` ssh: connect to host 10.0.3.188 port 2222: Connection refused ``` This [code](https://github.com/ansible/ansible-modules-core/blob/c52f475c64372042daab4ebc0660a2782b71d10d/files/synchronize.py#L414-L417) is responsible for this behavior. Why don't set port explicitly whatever its value is? 
",1,synchronize a problem with a non standard ssh port issue type bug report component name synchronize ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables vault password file vault pass txt os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary i have the following option in my etc ssh ssh config port i want to sync some directory to a machine that has this address i get an error ssh connect to host port connection refused this is responsible for this behavior why don t set port explicitly whatever its value is ,1 788,4389635622.0,IssuesEvent,2016-08-08 22:54:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"BUG: Amazon ec2 instance is always created, unless instance_id is set",aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Component Name: ec2 module ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ##### Environment: Ubuntu 15.10 Wily Werewolf ##### Summary: Using the AWS ec2 config ""state: present"", will always create a new instance, unless ""instance_id"" is set. Instances should ONLY be created when an instance cannot be found with identical settings. ##### Steps To Reproduce: ``` - name: Ensure EC2 Instance exists ec2: key_name: KeyPair region: ""{{aws_region}}"" image: ""{{ami_id}}"" group_id: ""{{security_group_id}}"" instance_type: ""{{instance_type}}"" instance_tags: '{""Name"":""InstanceName"",""DeploymentType"":""Production"",""ServerType"":""Infrastructure""}' monitoring: yes vpc_subnet_id: ""{{ vpc_subnet_id }}"" wait: yes state: present ``` ##### Expected Results: When I run this script twice, no new EC2 instance should be created. ##### Actual Results: A new EC2 instance is created.",True,"BUG: Amazon ec2 instance is always created, unless instance_id is set - ##### Issue Type: - Bug Report ##### Component Name: ec2 module ##### Ansible Version: ``` ansible 2.0.1.0 ``` ##### Ansible Configuration: ##### Environment: Ubuntu 15.10 Wily Werewolf ##### Summary: Using the AWS ec2 config ""state: present"", will always create a new instance, unless ""instance_id"" is set. Instances should ONLY be created when an instance cannot be found with identical settings. ##### Steps To Reproduce: ``` - name: Ensure EC2 Instance exists ec2: key_name: KeyPair region: ""{{aws_region}}"" image: ""{{ami_id}}"" group_id: ""{{security_group_id}}"" instance_type: ""{{instance_type}}"" instance_tags: '{""Name"":""InstanceName"",""DeploymentType"":""Production"",""ServerType"":""Infrastructure""}' monitoring: yes vpc_subnet_id: ""{{ vpc_subnet_id }}"" wait: yes state: present ``` ##### Expected Results: When I run this script twice, no new EC2 instance should be created. 
##### Actual Results: A new EC2 instance is created.",1,bug amazon instance is always created unless instance id is set issue type bug report component name module ansible version ansible ansible configuration environment ubuntu wily werewolf summary using the aws config state present will always create a new instance unless instance id is set instances should only be created when an instance cannot be found with identical settings steps to reproduce name ensure instance exists key name keypair region aws region image ami id group id security group id instance type instance type instance tags name instancename deploymenttype production servertype infrastructure monitoring yes vpc subnet id vpc subnet id wait yes state present expected results when i run this script twice no new instance should be created actual results a new instance is created ,1 1825,6577335360.0,IssuesEvent,2017-09-12 00:11:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,module junos_config with the parameter lines + delete: the delete details doesnt apply,affects_2.1 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config https://github.com/ansible/ansible-modules-core/blob/devel/network/junos/junos_config.py http://docs.ansible.com/ansible/junos_config_module.html ##### ANSIBLE VERSION ``` ksator@ubuntu:~/ansible-training-for-junos$ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ksator@ubuntu:~/ansible-training-for-junos$ ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT $ uname -a Linux ubuntu 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY can not delete junos configuration lines on junos devices using the module junos_config with the parameter lines + delete (the delete details doesnt apply). ##### STEPS TO REPRODUCE I used this playbook https://github.com/ksator/ansible-training-for-junos/blob/master/junos_config/playbook.yml (module junos_config, with the parameter lines, and some lines starting wuth delete). ansible-playbook junos_config/playbook.yml ``` --- - name: core module junos_config hosts: Chris-EX4200-test connection: local gather_facts: no vars_prompt: - name: DEVICE_PASSWORD prompt: Device password private: yes tasks: - name: test junos_config: host: ""{{ inventory_hostname }}"" username: pytraining password: ""{{ DEVICE_PASSWORD }}"" lines: - delete system name-server 172.30.179.2 - set system host-name newname ``` ##### EXPECTED RESULTS The expected result with module junos_config with the parameter lines with lines starting with delete is: this module should remove these junos lines on remote devices. with the playbook above, this ""system name-server 172.30.179.2"" should be removed from the junos configuration of the host. ##### ACTUAL RESULTS the playbook did not remove ""system name-server 172.30.179.2"" on the host. however the details of the set command (set system host-name newname) has been added succesfully. ``` I ran this playbook (https://github.com/ksator/ansible-training-for-junos/blob/master/junos_config/playbook.yml). with extra verbosity (-vvvv). and I enabled netconf log on device (http://www.juniper.net/documentation/en_US/junos14.2/topics/topic-map/netconf-traceoptions.html). here's the output log. 
ksator@ubuntu:~/ansible-training-for-junos$ ansible-playbook junos_config/playbook.yml -i hosts -vvvv Using /etc/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: playbook.yml ********************************************************* 1 plays in junos_config/playbook.yml Device password: PLAY [core module junos_config] ************************************************ TASK [test] ******************************************************************** task path: /home/ksator/ansible-training-for-junos/junos_config/playbook.yml:13 <172.30.179.113> ESTABLISH LOCAL CONNECTION FOR USER: ksator <172.30.179.113> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829 `"" && echo ansible-tmp-1466153012.52-229272728614829=""` echo $HOME/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829 `"" ) && sleep 0' <172.30.179.113> PUT /tmp/tmptlRDup TO /home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/junos_config <172.30.179.113> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/junos_config; rm -rf ""/home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/"" > /dev/null 2>&1 && sleep 0' changed: [172.30.179.113] => {""changed"": true, ""diff"": {""prepared"": ""\n[edit system]\n- host-name ex4200-13;\n+ host-name newname;\n""}, ""invocation"": {""module_args"": {""comment"": ""update config"", ""confirm"": 1, ""host"": ""172.30.179.113"", ""lines"": [""delete system name-server 172.30.179.2"", ""set system host-name newname""], ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""replace"": false, ""rollback"": null, ""ssh_keyfile"": null, ""timeout"": 0, ""transport"": ""netconf"", ""username"": ""pytraining"", ""zeroize"": false}, ""module_name"": ""junos_config""}} PLAY RECAP ********************************************************************* 172.30.179.113 : ok=1 changed=1 unreachable=0 failed=0 ksator@ubuntu:~/ansible-training-for-junos$ netconf logs on junos device pytraining@ex4200-13> show log netconf-ops.log Jun 17 10:43:35 Started tracing session: 48789 Jun 17 10:43:35 [48789] Incoming: urn:ietf:params:netconf:capability:writable-running:1.0urn:ietf:params:netconf:capability:rollback-on-error:1.0urn:liberouter:params:netconf:capability:power-control:1.0urn:ietf:params:netconf:capability:validate:1.0urn:i Jun 17 10:43:36 [48789] Incoming: ]]>]]> Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: 14.1X53-D25.2 Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: access Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: <ge-*> Jun 17 10:43:36 [48789] Outgoing: 4484 Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: 0 Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: access Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: <xe-*> Jun 17 10:43:36 [48789] Outgoing: 4484 Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: 0 Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: access Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 
[netconf trace condensed: the XML markup was stripped when this issue text was captured, leaving only fragmentary Outgoing: lines; this span was the device streaming back its full configuration in reply to a get-configuration request — Junos version 14.1X53-D25.2, vlan/interface ranges, system settings for host ex4200-13 (login users, radius, name-servers 172.30.179.2 and 172.30.179.3, syslog hosts), interfaces ge-0/0/0, lo0 and me0, snmp communities, a static default route via 172.30.179.1 and vlans test1-test3. The subsequent lock / load-configuration / commit exchange follows below.]
Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:36 [48789] Incoming: ]]>]]> Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:36 [48789] Incoming: set system host-name newname]]>]]> Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:37 [48789] Incoming: ]]>]]> Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: [edit system] Jun 17 10:43:37 [48789] Outgoing: - host-name ex4200-13; Jun 17 10:43:37 [48789] Outgoing: + host-name newname; Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: ]]>]]> Jun 17 10:43:37 [48789] Incoming: ]]>]]> Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: fpc0 Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: ]]>]]> Jun 17 10:43:39 [48789] Incoming: 1update config]]>]]> Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:41 [48789] Outgoing: Jun 17 10:43:41 [48789] Outgoing: fpc0 Jun 17 10:43:41 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: ]]>]]> Jun 17 10:43:44 [48789] Incoming: ]]>]]> Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:46 [48789] Outgoing: ]]>]]> Jun 17 10:43:46 [48789] Outgoing: ``` ",True,"module junos_config with the parameter lines + delete: the delete details doesnt apply - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config https://github.com/ansible/ansible-modules-core/blob/devel/network/junos/junos_config.py http://docs.ansible.com/ansible/junos_config_module.html ##### ANSIBLE VERSION ``` ksator@ubuntu:~/ansible-training-for-junos$ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ksator@ubuntu:~/ansible-training-for-junos$ ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT $ uname -a Linux ubuntu 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY can not delete junos configuration lines on junos devices using the module junos_config with the parameter lines + delete (the delete details doesnt apply). ##### STEPS TO REPRODUCE I used this playbook https://github.com/ksator/ansible-training-for-junos/blob/master/junos_config/playbook.yml (module junos_config, with the parameter lines, and some lines starting wuth delete). 
ansible-playbook junos_config/playbook.yml ``` --- - name: core module junos_config hosts: Chris-EX4200-test connection: local gather_facts: no vars_prompt: - name: DEVICE_PASSWORD prompt: Device password private: yes tasks: - name: test junos_config: host: ""{{ inventory_hostname }}"" username: pytraining password: ""{{ DEVICE_PASSWORD }}"" lines: - delete system name-server 172.30.179.2 - set system host-name newname ``` ##### EXPECTED RESULTS The expected result with module junos_config with the parameter lines with lines starting with delete is: this module should remove these junos lines on remote devices. with the playbook above, this ""system name-server 172.30.179.2"" should be removed from the junos configuration of the host. ##### ACTUAL RESULTS the playbook did not remove ""system name-server 172.30.179.2"" on the host. however the details of the set command (set system host-name newname) has been added succesfully. ``` I ran this playbook (https://github.com/ksator/ansible-training-for-junos/blob/master/junos_config/playbook.yml). with extra verbosity (-vvvv). and I enabled netconf log on device (http://www.juniper.net/documentation/en_US/junos14.2/topics/topic-map/netconf-traceoptions.html). here's the output log. ksator@ubuntu:~/ansible-training-for-junos$ ansible-playbook junos_config/playbook.yml -i hosts -vvvv Using /etc/ansible/ansible.cfg as config file Loaded callback default of type stdout, v2.0 PLAYBOOK: playbook.yml ********************************************************* 1 plays in junos_config/playbook.yml Device password: PLAY [core module junos_config] ************************************************ TASK [test] ******************************************************************** task path: /home/ksator/ansible-training-for-junos/junos_config/playbook.yml:13 <172.30.179.113> ESTABLISH LOCAL CONNECTION FOR USER: ksator <172.30.179.113> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829 `"" && echo ansible-tmp-1466153012.52-229272728614829=""` echo $HOME/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829 `"" ) && sleep 0' <172.30.179.113> PUT /tmp/tmptlRDup TO /home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/junos_config <172.30.179.113> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/junos_config; rm -rf ""/home/ksator/.ansible/tmp/ansible-tmp-1466153012.52-229272728614829/"" > /dev/null 2>&1 && sleep 0' changed: [172.30.179.113] => {""changed"": true, ""diff"": {""prepared"": ""\n[edit system]\n- host-name ex4200-13;\n+ host-name newname;\n""}, ""invocation"": {""module_args"": {""comment"": ""update config"", ""confirm"": 1, ""host"": ""172.30.179.113"", ""lines"": [""delete system name-server 172.30.179.2"", ""set system host-name newname""], ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""replace"": false, ""rollback"": null, ""ssh_keyfile"": null, ""timeout"": 0, ""transport"": ""netconf"", ""username"": ""pytraining"", ""zeroize"": false}, ""module_name"": ""junos_config""}} PLAY RECAP ********************************************************************* 172.30.179.113 : ok=1 changed=1 unreachable=0 failed=0 ksator@ubuntu:~/ansible-training-for-junos$ netconf logs on junos device pytraining@ex4200-13> show log netconf-ops.log Jun 17 10:43:35 Started tracing session: 48789 Jun 17 10:43:35 [48789] Incoming: 
[netconf trace condensed: this is the same XML-stripped capability exchange and device configuration dump already summarised in the trace above, repeated verbatim in this record's combined-text field; the load-configuration / commit exchange follows below.]
Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:36 [48789] Incoming: ]]>]]> Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:36 [48789] Incoming: set system host-name newname]]>]]> Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: Jun 17 10:43:36 [48789] Outgoing: ]]>]]> Jun 17 10:43:37 [48789] Incoming: ]]>]]> Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: [edit system] Jun 17 10:43:37 [48789] Outgoing: - host-name ex4200-13; Jun 17 10:43:37 [48789] Outgoing: + host-name newname; Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: ]]>]]> Jun 17 10:43:37 [48789] Incoming: ]]>]]> Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:37 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: fpc0 Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: ]]>]]> Jun 17 10:43:39 [48789] Incoming: 1update config]]>]]> Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:39 [48789] Outgoing: Jun 17 10:43:41 [48789] Outgoing: Jun 17 10:43:41 [48789] Outgoing: fpc0 Jun 17 10:43:41 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: ]]>]]> Jun 17 10:43:44 [48789] Incoming: ]]>]]> Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:44 [48789] Outgoing: Jun 17 10:43:46 [48789] Outgoing: ]]>]]> Jun 17 10:43:46 [48789] Outgoing: ``` ",1,module junos config with the parameter lines delete the delete details doesnt apply issue type bug report component name junos config ansible version ksator ubuntu ansible training for junos ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ksator ubuntu ansible training for junos configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific uname a linux ubuntu generic ubuntu smp fri jul utc gnu linux summary can not delete junos configuration lines on junos devices using the module junos config with the parameter lines delete the delete details doesnt apply steps to reproduce i used this playbook module junos config with the parameter lines and some lines starting wuth delete ansible playbook junos config playbook yml name core module junos config hosts chris test connection local gather facts no vars prompt name device password prompt device password private yes tasks name test junos config host inventory hostname username pytraining password device password lines delete system name server set system host name newname expected results the expected result with module junos config with the parameter lines with lines starting with delete is this module should remove these junos lines on remote devices with the playbook above this system name server should be removed from the junos configuration of the host actual 
results the playbook did not remove system name server on the host however the details of the set command set system host name newname has been added succesfully i ran this playbook with extra verbosity vvvv and i enabled netconf log on device here s the output log ksator ubuntu ansible training for junos ansible playbook junos config playbook yml i hosts vvvv using etc ansible ansible cfg as config file loaded callback default of type stdout playbook playbook yml plays in junos config playbook yml device password play task task path home ksator ansible training for junos junos config playbook yml establish local connection for user ksator exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmptlrdup to home ksator ansible tmp ansible tmp junos config exec bin sh c lang c lc all c lc messages c usr bin python home ksator ansible tmp ansible tmp junos config rm rf home ksator ansible tmp ansible tmp dev null sleep changed changed true diff prepared n n host name n host name newname n invocation module args comment update config confirm host lines password value specified in no log parameter port null provider null replace false rollback null ssh keyfile null timeout transport netconf username pytraining zeroize false module name junos config play recap ok changed unreachable failed ksator ubuntu ansible training for junos netconf logs on junos device pytraining show log netconf ops log jun started tracing session jun incoming urn ietf params netconf capability writable running urn ietf params netconf capability rollback on error urn liberouter params netconf capability power control urn ietf params netconf capability validate urn i jun incoming jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing access jun outgoing jun outgoing jun outgoing lt ge gt jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing access jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing lt xe gt jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing access jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing trunk jun outgoing jun outgoing jun outgoing lt ge gt jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing trunk jun outgoing jun outgoing all jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing lt xe gt jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing trunk jun outgoing jun outgoing all jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing poc nl jnpr net jun outgoing europe amsterdam jun outgoing radius jun outgoing password jun outgoing jun outgoing vx jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing pytraining jun outgoing jun outgoing super user jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing remote jun outgoing jun outgoing super user jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing netconf ops log jun outgoing jun 
outgoing jun outgoing jun outgoing jun outgoing jun outgoing all jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing any jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing any jun outgoing jun outgoing jun outgoing jun outgoing authorization jun outgoing jun outgoing jun outgoing jun outgoing interactive commands jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing any jun outgoing jun outgoing jun outgoing jun outgoing authorization jun outgoing jun outgoing jun outgoing jun outgoing interactive commands jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing messages jun outgoing jun outgoing any jun outgoing jun outgoing jun outgoing jun outgoing authorization jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing ge jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing ams epoc jun outgoing emea poc juniper net jun outgoing jun outgoing public jun outgoing read only jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing private jun outgoing read write jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun incoming jun outgoing jun outgoing jun outgoing jun incoming set system host name newname jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun incoming jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing host name jun outgoing host name newname jun outgoing jun outgoing jun outgoing jun outgoing jun incoming jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun incoming update config jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun outgoing jun incoming jun outgoing jun outgoing jun outgoing jun outgoing ,1 1845,6577385115.0,IssuesEvent,2017-09-12 00:32:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum module should support disableexcludes option,affects_2.0 
feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### SUMMARY Core module yum, should support more yum options. Disabling excluded packages in repository (--disableexcludes) is a handy one. ",True,"Yum module should support disableexcludes option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### SUMMARY Core module yum, should support more yum options. Disabling excluded packages in repository (--disableexcludes) is a handy one. ",1,yum module should support disableexcludes option issue type feature idea component name yum module ansible version ansible summary core module yum should support more yum options disabling excluded packages in repository disableexcludes is a handy one ,1 1742,6574902975.0,IssuesEvent,2017-09-11 14:26:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"docker_container reports ""Error: docker-py version is 1.10.4. Minimum version required is 1.7.0.""",affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Changing from deprecated docker module to docker_container. Get a bad check on docker-py ##### STEPS TO REPRODUCE Install a docker-py > 1.7 and use docker_container ##### EXPECTED RESULTS Works fine with docker module.. but it is deprecated ##### ACTUAL RESULTS ``` TASK [Start rancher-agent] ***************************************************** fatal: [10.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.4. Minimum version required is 1.7.0.""} fatal: [10.0.0.2]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.4. Minimum version required is 1.7.0.""} NO MORE HOSTS LEFT ************************************************************* ``` ",True,"docker_container reports ""Error: docker-py version is 1.10.4. Minimum version required is 1.7.0."" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Changing from deprecated docker module to docker_container. Get a bad check on docker-py ##### STEPS TO REPRODUCE Install a docker-py > 1.7 and use docker_container ##### EXPECTED RESULTS Works fine with docker module.. but it is deprecated ##### ACTUAL RESULTS ``` TASK [Start rancher-agent] ***************************************************** fatal: [10.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.4. Minimum version required is 1.7.0.""} fatal: [10.0.0.2]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.4. 
Minimum version required is 1.7.0.""} NO MORE HOSTS LEFT ************************************************************* ``` ",1,docker container reports error docker py version is minimum version required is issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment n a summary changing from deprecated docker module to docker container get a bad check on docker py steps to reproduce install a docker py and use docker container expected results works fine with docker module but it is deprecated actual results task fatal failed changed false failed true msg error docker py version is minimum version required is fatal failed changed false failed true msg error docker py version is minimum version required is no more hosts left ,1 965,4707894644.0,IssuesEvent,2016-10-13 21:31:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Mutiple inputs to vsphere_guest module are silently ignored when launching from a template,affects_1.9 bug_report cloud vmware waiting_on_maintainer,"[![Bountysource](https://www.bountysource.com/badge/issue?issue_id=16226901)](https://www.bountysource.com/issues/16226901-mutiple-inputs-to-vsphere_guest-module-are-silently-ignored-when-launching-from-a-template?utm_source=16226901&utm_medium=shield&utm_campaign=ISSUE_BADGE) #### Issue Type: Bug Report #### Ansible Version: 1.9.1 #### Ansible Configuration: n/a #### Environment: OS X 10.10.3, vCenter 6 #### Summary: Mutiple inputs to vsphere_guest module are ignored when launching from a template. Does not throw errors for invalid data either. vm_nic, vm_disk & vm_hardware parameters are all ignored. #### Steps To Reproduce: This command creates VM with but ignores valid options. Invalid options are ignored as well. ``` - name: Create other-app VM for testing connection: local vsphere_guest: vcenter_hostname: 110.120.113.333 username: foo password: bar guest: other-app-{{ 9 |random}}{{ 9 |random}}{{ 9 |random}}{{ 9 |random}}{{ 9 |random}} vm_disk: disk1: size_gb: 32 type: thin datastore: foo_VMFS_1 vm_hardware: memory_mb: 8192 num_cpus: 2 osid: ubuntu64Guest scsi: lsi vm_nic: nic1: type: vmxnet3 network: slartibartfast network_type: dvs vm_extra_config: notes: this is a test VM cluster: Ontrack-Cluster1 resource_pool: /Resources/foo-default from_template: yes template_src: ubuntu-14.04-template esxi: datacenter: foo-DC ``` #### Expected Results: VM launched with specific NIC, disk and hardware settings. Invalid settings (such as mis-spelt NIC networks should produce an error). #### Actual Results: VM is launched but network, CPU and RAM that was originally specified in the template is retained. 
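A short aside on the docker_container / docker-py version check failure above, before the vsphere_guest log that follows: one plausible cause for a false "minimum version" error of this kind (not confirmed against the module source) is comparing version strings lexicographically, since "1.10.4" sorts before "1.7.0" as plain text. A minimal sketch, assuming the version_compare filter is available in the installed Ansible:

```
- hosts: localhost
  gather_facts: no
  tasks:
    # Plain string comparison: '1' < '7' at the third character,
    # so "1.10.4" wrongly appears to be older than "1.7.0".
    - debug:
        msg: "lexicographic says too old: {{ '1.10.4' < '1.7.0' }}"

    # Numeric comparison via the version_compare filter gives the
    # expected answer: 1.10.4 satisfies a 1.7.0 minimum.
    - debug:
        msg: "version_compare says ok: {{ '1.10.4' | version_compare('1.7.0', '>=') }}"
```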
``` REMOTE_MODULE vsphere_guest template_src=ubuntu-14.04-template vcenter_hostname=110.120.113.333 cluster=Ontrack-Cluster1 guest=other-app-03770 password=VALUE_HIDDEN resource_pool=/Resources/foo-default username=foo EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913 && echo $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913'] PUT /var/folders/s0/jy7xj1915_bg41nmvc3cjcdw0000gn/T/tmpDWPVuq TO /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/vsphere_guest EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/vsphere_guest; rm -rf /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/ >/dev/null 2>&1'] changed: [localhost] => {""changed"": true} ```",True,"Mutiple inputs to vsphere_guest module are silently ignored when launching from a template - [![Bountysource](https://www.bountysource.com/badge/issue?issue_id=16226901)](https://www.bountysource.com/issues/16226901-mutiple-inputs-to-vsphere_guest-module-are-silently-ignored-when-launching-from-a-template?utm_source=16226901&utm_medium=shield&utm_campaign=ISSUE_BADGE) #### Issue Type: Bug Report #### Ansible Version: 1.9.1 #### Ansible Configuration: n/a #### Environment: OS X 10.10.3, vCenter 6 #### Summary: Mutiple inputs to vsphere_guest module are ignored when launching from a template. Does not throw errors for invalid data either. vm_nic, vm_disk & vm_hardware parameters are all ignored. #### Steps To Reproduce: This command creates VM with but ignores valid options. Invalid options are ignored as well. ``` - name: Create other-app VM for testing connection: local vsphere_guest: vcenter_hostname: 110.120.113.333 username: foo password: bar guest: other-app-{{ 9 |random}}{{ 9 |random}}{{ 9 |random}}{{ 9 |random}}{{ 9 |random}} vm_disk: disk1: size_gb: 32 type: thin datastore: foo_VMFS_1 vm_hardware: memory_mb: 8192 num_cpus: 2 osid: ubuntu64Guest scsi: lsi vm_nic: nic1: type: vmxnet3 network: slartibartfast network_type: dvs vm_extra_config: notes: this is a test VM cluster: Ontrack-Cluster1 resource_pool: /Resources/foo-default from_template: yes template_src: ubuntu-14.04-template esxi: datacenter: foo-DC ``` #### Expected Results: VM launched with specific NIC, disk and hardware settings. Invalid settings (such as mis-spelt NIC networks should produce an error). #### Actual Results: VM is launched but network, CPU and RAM that was originally specified in the template is retained. 
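On the vsphere_guest report above: since bad vm_hardware / vm_nic values are silently ignored when cloning from a template, reading the guest back after creation at least makes the drift visible. A rough sketch reusing the connection details from the report, assuming the module's vmware_guest_facts mode behaves as documented for this 1.9-era module (the exact fact names it returns may vary):

```
- name: Read back the cloned guest
  connection: local
  vsphere_guest:
    vcenter_hostname: 110.120.113.333
    username: foo
    password: bar
    guest: other-app-03770
    vmware_guest_facts: yes
  register: guest_info

# Compare this output by hand against the vm_hardware / vm_nic values
# requested in the task that created the VM.
- name: Show what the guest actually ended up with
  debug:
    var: guest_info.ansible_facts
```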
``` REMOTE_MODULE vsphere_guest template_src=ubuntu-14.04-template vcenter_hostname=110.120.113.333 cluster=Ontrack-Cluster1 guest=other-app-03770 password=VALUE_HIDDEN resource_pool=/Resources/foo-default username=foo EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913 && echo $HOME/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913'] PUT /var/folders/s0/jy7xj1915_bg41nmvc3cjcdw0000gn/T/tmpDWPVuq TO /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/vsphere_guest EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/vsphere_guest; rm -rf /Users/tpai/.ansible/tmp/ansible-tmp-1432138365.27-28886126071913/ >/dev/null 2>&1'] changed: [localhost] => {""changed"": true} ```",1,mutiple inputs to vsphere guest module are silently ignored when launching from a template issue type bug report ansible version ansible configuration n a environment os x vcenter summary mutiple inputs to vsphere guest module are ignored when launching from a template does not throw errors for invalid data either vm nic vm disk vm hardware parameters are all ignored steps to reproduce this command creates vm with but ignores valid options invalid options are ignored as well name create other app vm for testing connection local vsphere guest vcenter hostname username foo password bar guest other app random random random random random vm disk size gb type thin datastore foo vmfs vm hardware memory mb num cpus osid scsi lsi vm nic type network slartibartfast network type dvs vm extra config notes this is a test vm cluster ontrack resource pool resources foo default from template yes template src ubuntu template esxi datacenter foo dc expected results vm launched with specific nic disk and hardware settings invalid settings such as mis spelt nic networks should produce an error actual results vm is launched but network cpu and ram that was originally specified in the template is retained remote module vsphere guest template src ubuntu template vcenter hostname cluster ontrack guest other app password value hidden resource pool resources foo default username foo exec put var folders t tmpdwpvuq to users tpai ansible tmp ansible tmp vsphere guest exec changed changed true ,1 1722,6574505065.0,IssuesEvent,2017-09-11 13:08:23,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service not working as expected on 2.2.0,affects_2.2 bug_report cloud docker waiting_on_maintainer,"Hi, Sorry if I'm making some sort of mistake, I can't find any reference to this issue and don't konw how to fix it. I'm using the same configuration that works with ansible 2.1.0, but it fails with ansible 2.2.0 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no specific config ##### OS / ENVIRONMENT running from multiple OS (linux and mac) but managing an ubuntu machine (16.04 LTS xenial) ##### SUMMARY On executing docker_service, I get an error running the docker services that seems to be related to Dockerfile, although I have specified not to build the containers. Not sure if I need to explicitly add another parameter or not. 
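On the docker_service failure above: whether it helps depends on the compose files, but one workaround used for similar "Cannot locate specified Dockerfile" failures is to make sure every referenced image already exists locally and to keep build disabled, so nothing forces a Dockerfile lookup. A hedged sketch (the image name is hypothetical, not taken from the report):

```
- name: Pre-pull the image referenced by the compose files
  docker_image:
    name: registry.example.com/liveheats_app   # hypothetical image name
    pull: yes

- name: Bring the project up without building
  docker_service:
    project_src: /apps/liveheats
    state: present
    build: no
    pull: yes
    files:
      - docker-compose.yml
      - docker-compose.production.yml
```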
##### STEPS TO REPRODUCE my playbook: ``` - debug: msg=""restarting services"" - docker_service: project_src: /apps/liveheats state: present build: no files: - docker-compose.yml - docker-compose.production.yml ``` ##### EXPECTED RESULTS With 2.1.0.0 it would start the docker services correctly ##### ACTUAL RESULTS ``` fatal: [X.X.X.X]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": null, ""dependencies"": true, ""docker_host"": null, ""files"": [ ""docker-compose.yml"", ""docker-compose.production.yml"" ], ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": null, ""project_src"": ""/apps/liveheats"", ""pull"": false, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 10, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - 500 Server Error: Internal Server Error (\""Cannot locate specified Dockerfile: Dockerfile\"")"" } ``` ",True,"docker_service not working as expected on 2.2.0 - Hi, Sorry if I'm making some sort of mistake, I can't find any reference to this issue and don't konw how to fix it. I'm using the same configuration that works with ansible 2.1.0, but it fails with ansible 2.2.0 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no specific config ##### OS / ENVIRONMENT running from multiple OS (linux and mac) but managing an ubuntu machine (16.04 LTS xenial) ##### SUMMARY On executing docker_service, I get an error running the docker services that seems to be related to Dockerfile, although I have specified not to build the containers. Not sure if I need to explicitly add another parameter or not. ##### STEPS TO REPRODUCE my playbook: ``` - debug: msg=""restarting services"" - docker_service: project_src: /apps/liveheats state: present build: no files: - docker-compose.yml - docker-compose.production.yml ``` ##### EXPECTED RESULTS With 2.1.0.0 it would start the docker services correctly ##### ACTUAL RESULTS ``` fatal: [X.X.X.X]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": null, ""dependencies"": true, ""docker_host"": null, ""files"": [ ""docker-compose.yml"", ""docker-compose.production.yml"" ], ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": null, ""project_src"": ""/apps/liveheats"", ""pull"": false, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 10, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - 500 Server Error: Internal Server Error (\""Cannot locate specified Dockerfile: Dockerfile\"")"" } ``` ",1,docker service not working as expected on hi sorry if i m making some sort of mistake i can t find any reference to this issue and don t konw how to fix it i m using the same configuration that works with ansible but it fails with ansible issue type bug report component name docker service ansible version ansible version ansible config file configured module search path default w o overrides configuration no specific config os environment running from multiple os linux and mac but managing an ubuntu machine lts xenial summary on executing docker service i get an error running the docker services that seems to be related to dockerfile although i have specified not to build the containers not sure if i need to explicitly add another parameter or not steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used my playbook debug msg restarting services docker service project src apps liveheats state present build no files docker compose yml docker compose production yml expected results with it would start the docker services correctly actual results fatal failed changed false failed true invocation module args api version null build false cacert path null cert path null debug false definition null dependencies true docker host null files docker compose yml docker compose production yml filter logger false hostname check false key path null nocache false project name null project src apps liveheats pull false recreate smart remove images null remove orphans false remove volumes false restarted false scale null services null ssl version null state present stopped false timeout tls null tls hostname null tls verify null module name docker service msg error starting project server error internal server error cannot locate specified dockerfile dockerfile ,1 1369,5923466011.0,IssuesEvent,2017-05-23 08:01:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible connection is sometimes dropped mid-play on windows hosts,affects_2.1 bug_report waiting_on_maintainer windows," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_user.ps1 ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT I am using the Amazon created flavor of linux. ami id: ami-f5f41398 ##### SUMMARY For the win_user task, ansible fails to reach the remote host, despite already making a connection and performing tasks. 
##### STEPS TO REPRODUCE Can't seem to reproduce this accurately, and it's unfortunately not tied to one specific module (I just so happen to be running the win_user module. ``` TASK [create_users : Create noreply user] ************************************** task path: /home/ec2-user/Ansible/roles/create_users/tasks/main.yml:20 Traceback (most recent call last): File ""/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/winrm.py"", line 271, in exec_command result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True) File ""/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/winrm.py"", line 233, in _winrm_exec self.protocol.cleanup_command(self.shell_id, command_id) File ""/usr/local/lib/python2.7/site-packages/winrm/protocol.py"", line 307, in cleanup_command res = self.send_message(xmltodict.unparse(req)) File ""/usr/local/lib/python2.7/site-packages/winrm/protocol.py"", line 207, in send_message return self.transport.send_message(message) File ""/usr/local/lib/python2.7/site-packages/winrm/transport.py"", line 173, in send_message response = self.session.send(prepared_request, timeout=self.read_timeout_sec) File ""/usr/local/lib/python2.7/site-packages/requests/sessions.py"", line 596, in send r = adapter.send(request, **kwargs) File ""/usr/local/lib/python2.7/site-packages/requests/adapters.py"", line 479, in send raise ConnectTimeout(e, request=request) ConnectTimeout: HTTPSConnectionPool(host='172.16.169.179', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(, 'Connection to 172.16.169.179 timed out. (connect timeout=30)')) fatal: [172.16.169.179]: UNREACHABLE! => {""changed"": false, ""msg"": ""failed to exec cmd PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand 
UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBUAHIAeQAKAHsACgAmACAAJwBDADoAXABVAHMAZQByAHMAXABhAG4AcwBpAGIAbABlAFwAQQBwAHAARABhAHQAYQBcAEwAbwBjAGEAbABcAFQAZQBtAHAAXABhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADQANwA3ADMAMQA2ADYAMAAwAC4ANwA3AC0AMQA2ADMAMgAxADgANgA5ADAAOQA2ADUAOQA4ADYAXAB3AGkAbgBfAHUAcwBlAHIALgBwAHMAMQAnAAoAfQAKAEMAYQB0AGMAaAAKAHsACgAkAF8AbwBiAGoAIAA9ACAAQAB7ACAAZgBhAGkAbABlAGQAIAA9ACAAJAB0AHIAdQBlACAAfQAKAEkAZgAgACgAJABfAC4ARQB4AGMAZQBwAHQAaQBvAG4ALgBHAGUAdABUAHkAcABlACkACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAG0AcwBnACcALAAgACQAXwAuAEUAeABjAGUAcAB0AGkAbwBuAC4ATQBlAHMAcwBhAGcAZQApAAoAfQAKAEUAbABzAGUACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAG0AcwBnACcALAAgACQAXwAuAFQAbwBTAHQAcgBpAG4AZwAoACkAKQAKAH0ACgBJAGYAIAAoACQAXwAuAEkAbgB2AG8AYwBhAHQAaQBvAG4ASQBuAGYAbwAuAFAAbwBzAGkAdABpAG8AbgBNAGUAcwBzAGEAZwBlACkACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAGUAeABjAGUAcAB0AGkAbwBuACcALAAgACQAXwAuAEkAbgB2AG8AYwBhAHQAaQBvAG4ASQBuAGYAbwAuAFAAbwBzAGkAdABpAG8AbgBNAGUAcwBzAGEAZwBlACkACgB9AAoARQBsAHMAZQBJAGYAIAAoACQAXwAuAFMAYwByAGkAcAB0AFMAdABhAGMAawBUAHIAYQBjAGUAKQAKAHsACgAkAF8AbwBiAGoALgBBAGQAZAAoACcAZQB4AGMAZQBwAHQAaQBvAG4AJwAsACAAJABfAC4AUwBjAHIAaQBwAHQAUwB0AGEAYwBrAFQAcgBhAGMAZQApAAoAfQAKAFQAcgB5AAoAewAKACQAXwBvAGIAagAuAEEAZABkACgAJwBlAHIAcgBvAHIAXwByAGUAYwBvAHIAZAAnACwAIAAoACQAXwAgAHwAIABDAG8AbgB2AGUAcgB0AFQAbwAtAEoAcwBvAG4AIAB8ACAAQwBvAG4AdgBlAHIAdABGAHIAbwBtAC0ASgBzAG8AbgApACkACgB9AAoAQwBhAHQAYwBoAAoAewAKAH0ACgBFAGMAaABvACAAJABfAG8AYgBqACAAfAAgAEMAbwBuAHYAZQByAHQAVABvAC0ASgBzAG8AbgAgAC0AQwBvAG0AcAByAGUAcwBzACAALQBEAGUAcAB0AGgAIAA5ADkACgBFAHgAaQB0ACAAMQAKAH0ACgBGAGkAbgBhAGwAbAB5ACAAewAgAFIAZQBtAG8AdgBlAC0ASQB0AGUAbQAgACIAQwA6AFwAVQBzAGUAcgBzAFwAYQBuAHMAaQBiAGwAZQBcAEEAcABwAEQAYQB0AGEAXABMAG8AYwBhAGwAXABUAGUAbQBwAFwAYQBuAHMAaQBiAGwAZQAtAHQAbQBwAC0AMQA0ADcANwAzADEANgA2ADAAMAAuADcANwAtADEANgAzADIAMQA4ADYAOQAwADkANgA1ADkAOAA2ACIAIAAtAEYAbwByAGMAZQAgAC0AUgBlAGMAdQByAHMAZQAgAC0ARQByAHIAbwByAEEAYwB0AGkAbwBuACAAUwBpAGwAZQBuAHQAbAB5AEMAbwBuAHQAaQBuAHUAZQAgAH0A"", ""unreachable"": true} ``` ##### EXPECTED RESULTS Playbook continues run. ##### ACTUAL RESULTS Ansible connection is severed mid-play. ``` ``` ",True,"Ansible connection is sometimes dropped mid-play on windows hosts - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_user.ps1 ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT I am using the Amazon created flavor of linux. ami id: ami-f5f41398 ##### SUMMARY For the win_user task, ansible fails to reach the remote host, despite already making a connection and performing tasks. ##### STEPS TO REPRODUCE Can't seem to reproduce this accurately, and it's unfortunately not tied to one specific module (I just so happen to be running the win_user module. 
``` TASK [create_users : Create noreply user] ************************************** task path: /home/ec2-user/Ansible/roles/create_users/tasks/main.yml:20 Traceback (most recent call last): File ""/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/winrm.py"", line 271, in exec_command result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True) File ""/usr/local/lib/python2.7/site-packages/ansible/plugins/connection/winrm.py"", line 233, in _winrm_exec self.protocol.cleanup_command(self.shell_id, command_id) File ""/usr/local/lib/python2.7/site-packages/winrm/protocol.py"", line 307, in cleanup_command res = self.send_message(xmltodict.unparse(req)) File ""/usr/local/lib/python2.7/site-packages/winrm/protocol.py"", line 207, in send_message return self.transport.send_message(message) File ""/usr/local/lib/python2.7/site-packages/winrm/transport.py"", line 173, in send_message response = self.session.send(prepared_request, timeout=self.read_timeout_sec) File ""/usr/local/lib/python2.7/site-packages/requests/sessions.py"", line 596, in send r = adapter.send(request, **kwargs) File ""/usr/local/lib/python2.7/site-packages/requests/adapters.py"", line 479, in send raise ConnectTimeout(e, request=request) ConnectTimeout: HTTPSConnectionPool(host='172.16.169.179', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(, 'Connection to 172.16.169.179 timed out. (connect timeout=30)')) fatal: [172.16.169.179]: UNREACHABLE! => {""changed"": false, ""msg"": ""failed to exec cmd PowerShell -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -EncodedCommand UwBlAHQALQBTAHQAcgBpAGMAdABNAG8AZABlACAALQBWAGUAcgBzAGkAbwBuACAATABhAHQAZQBzAHQACgBUAHIAeQAKAHsACgAmACAAJwBDADoAXABVAHMAZQByAHMAXABhAG4AcwBpAGIAbABlAFwAQQBwAHAARABhAHQAYQBcAEwAbwBjAGEAbABcAFQAZQBtAHAAXABhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADQANwA3ADMAMQA2ADYAMAAwAC4ANwA3AC0AMQA2ADMAMgAxADgANgA5ADAAOQA2ADUAOQA4ADYAXAB3AGkAbgBfAHUAcwBlAHIALgBwAHMAMQAnAAoAfQAKAEMAYQB0AGMAaAAKAHsACgAkAF8AbwBiAGoAIAA9ACAAQAB7ACAAZgBhAGkAbABlAGQAIAA9ACAAJAB0AHIAdQBlACAAfQAKAEkAZgAgACgAJABfAC4ARQB4AGMAZQBwAHQAaQBvAG4ALgBHAGUAdABUAHkAcABlACkACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAG0AcwBnACcALAAgACQAXwAuAEUAeABjAGUAcAB0AGkAbwBuAC4ATQBlAHMAcwBhAGcAZQApAAoAfQAKAEUAbABzAGUACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAG0AcwBnACcALAAgACQAXwAuAFQAbwBTAHQAcgBpAG4AZwAoACkAKQAKAH0ACgBJAGYAIAAoACQAXwAuAEkAbgB2AG8AYwBhAHQAaQBvAG4ASQBuAGYAbwAuAFAAbwBzAGkAdABpAG8AbgBNAGUAcwBzAGEAZwBlACkACgB7AAoAJABfAG8AYgBqAC4AQQBkAGQAKAAnAGUAeABjAGUAcAB0AGkAbwBuACcALAAgACQAXwAuAEkAbgB2AG8AYwBhAHQAaQBvAG4ASQBuAGYAbwAuAFAAbwBzAGkAdABpAG8AbgBNAGUAcwBzAGEAZwBlACkACgB9AAoARQBsAHMAZQBJAGYAIAAoACQAXwAuAFMAYwByAGkAcAB0AFMAdABhAGMAawBUAHIAYQBjAGUAKQAKAHsACgAkAF8AbwBiAGoALgBBAGQAZAAoACcAZQB4AGMAZQBwAHQAaQBvAG4AJwAsACAAJABfAC4AUwBjAHIAaQBwAHQAUwB0AGEAYwBrAFQAcgBhAGMAZQApAAoAfQAKAFQAcgB5AAoAewAKACQAXwBvAGIAagAuAEEAZABkACgAJwBlAHIAcgBvAHIAXwByAGUAYwBvAHIAZAAnACwAIAAoACQAXwAgAHwAIABDAG8AbgB2AGUAcgB0AFQAbwAtAEoAcwBvAG4AIAB8ACAAQwBvAG4AdgBlAHIAdABGAHIAbwBtAC0ASgBzAG8AbgApACkACgB9AAoAQwBhAHQAYwBoAAoAewAKAH0ACgBFAGMAaABvACAAJABfAG8AYgBqACAAfAAgAEMAbwBuAHYAZQByAHQAVABvAC0ASgBzAG8AbgAgAC0AQwBvAG0AcAByAGUAcwBzACAALQBEAGUAcAB0AGgAIAA5ADkACgBFAHgAaQB0ACAAMQAKAH0ACgBGAGkAbgBhAGwAbAB5ACAAewAgAFIAZQBtAG8AdgBlAC0ASQB0AGUAbQAgACIAQwA6AFwAVQBzAGUAcgBzAFwAYQBuAHMAaQBiAGwAZQBcAEEAcABwAEQAYQB0AGEAXABMAG8AYwBhAGwAXABUAGUAbQBwAFwAYQBuAHMAaQBiAGwAZQAtAHQAbQBwAC0AMQA0ADcANwAzADEANgA2ADAAMAAuADcANwAtADEANgAzADIAMQA4ADYAOQAwADkANgA1ADkAOAA2ACIAIAAtAEYAbwByAGMAZQAgAC0AUgBlAGMAdQByAHMAZQAgAC0ARQByAH
IAbwByAEEAYwB0AGkAbwBuACAAUwBpAGwAZQBuAHQAbAB5AEMAbwBuAHQAaQBuAHUAZQAgAH0A"", ""unreachable"": true} ``` ##### EXPECTED RESULTS Playbook continues run. ##### ACTUAL RESULTS Ansible connection is severed mid-play. ``` ``` ",1,ansible connection is sometimes dropped mid play on windows hosts issue type bug report component name win user ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific i am using the amazon created flavor of linux ami id ami summary for the win user task ansible fails to reach the remote host despite already making a connection and performing tasks steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used can t seem to reproduce this accurately and it s unfortunately not tied to one specific module i just so happen to be running the win user module task task path home user ansible roles create users tasks main yml traceback most recent call last file usr local lib site packages ansible plugins connection winrm py line in exec command result self winrm exec cmd parts cmd parts from exec true file usr local lib site packages ansible plugins connection winrm py line in winrm exec self protocol cleanup command self shell id command id file usr local lib site packages winrm protocol py line in cleanup command res self send message xmltodict unparse req file usr local lib site packages winrm protocol py line in send message return self transport send message message file usr local lib site packages winrm transport py line in send message response self session send prepared request timeout self read timeout sec file usr local lib site packages requests sessions py line in send r adapter send request kwargs file usr local lib site packages requests adapters py line in send raise connecttimeout e request request connecttimeout httpsconnectionpool host port max retries exceeded with url wsman caused by connecttimeouterror connection to timed out connect timeout fatal unreachable changed false msg failed to exec cmd powershell noprofile noninteractive executionpolicy unrestricted encodedcommand unreachable true expected results playbook continues run actual results ansible connection is severed mid play ,1 1655,6573991771.0,IssuesEvent,2017-09-11 10:59:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,authorized_key: pull keys from git server before the module is copied to the target machine,affects_2.2 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME module: authorized_key ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/username/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None which affect module behaviour. ##### OS / ENVIRONMENT N/A ##### SUMMARY In my company we are using a local git repository server (gitlab) and very few servers are able to access it. The majority of servers don't have network access to our local gitlab instance since we use it exclusively for ansible. 
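On the authorized_key request above (its summary continues below): until the module can resolve key URLs on the control machine, a workaround is to download the key file on the controller, which can reach GitLab, and hand the contents to authorized_key as a plain string. A minimal sketch using the reporter's URL:

```
- name: Fetch the user's keys from GitLab on the control machine
  uri:
    url: "https://gitlab_server/username.keys"
    return_content: yes
    validate_certs: no
  delegate_to: localhost
  register: gitlab_keys

- name: Install the fetched keys on the target host
  authorized_key:
    user: username
    key: "{{ gitlab_keys.content }}"
    exclusive: yes
    state: present
```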
So when i use the authorized_key module to deploy ssh keys and tell it to pull the keys from our gitlab instance (https://gitlab_server/{{ username }}.keys) the servers that can't access our gitlab instance cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine. sorry if this is to much to ask and i know there are other ways to deploy ssh keys, but i find the ability to provide the keys from URL very useful and it seems useless if target servers cannot access the git server to get the keys. ##### STEPS TO REPRODUCE Try to deploy the keys to a target that cannot access the git server. ``` - name: ""Deploy public ssh key for username"" authorized_key: user: ""username"" key: ""https://gitlab_server/username.keys"" exclusive: yes validate_certs: no state: present ``` ##### EXPECTED RESULTS ``` changed: [ansible_host] ``` ##### ACTUAL RESULTS Because the target server cannot access the local git server the following error appears. ``` fatal: [ansible_host]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""exclusive"": true, ""key"": ""https://gitlab_server/username.keys"", ""key_options"": null, ""manage_dir"": true, ""path"": null, ""state"": ""present"", ""unique"": false, ""user"": ""username"", ""validate_certs"": false }, ""module_name"": ""authorized_key"" }, ""msg"": ""Error getting key from: https://gitlab_server/username.keys"" } ``` ",True,"authorized_key: pull keys from git server before the module is copied to the target machine - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME module: authorized_key ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/username/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None which affect module behaviour. ##### OS / ENVIRONMENT N/A ##### SUMMARY In my company we are using a local git repository server (gitlab) and very few servers are able to access it. The majority of servers don't have network access to our local gitlab instance since we use it exclusively for ansible. So when i use the authorized_key module to deploy ssh keys and tell it to pull the keys from our gitlab instance (https://gitlab_server/{{ username }}.keys) the servers that can't access our gitlab instance cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine. sorry if this is to much to ask and i know there are other ways to deploy ssh keys, but i find the ability to provide the keys from URL very useful and it seems useless if target servers cannot access the git server to get the keys. ##### STEPS TO REPRODUCE Try to deploy the keys to a target that cannot access the git server. ``` - name: ""Deploy public ssh key for username"" authorized_key: user: ""username"" key: ""https://gitlab_server/username.keys"" exclusive: yes validate_certs: no state: present ``` ##### EXPECTED RESULTS ``` changed: [ansible_host] ``` ##### ACTUAL RESULTS Because the target server cannot access the local git server the following error appears. ``` fatal: [ansible_host]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""exclusive"": true, ""key"": ""https://gitlab_server/username.keys"", ""key_options"": null, ""manage_dir"": true, ""path"": null, ""state"": ""present"", ""unique"": false, ""user"": ""username"", ""validate_certs"": false }, ""module_name"": ""authorized_key"" }, ""msg"": ""Error getting key from: https://gitlab_server/username.keys"" } ``` ",1,authorized key pull keys from git server before the module is copied to the target machine issue type feature idea component name module authorized key ansible version ansible config file home username ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none which affect module behaviour os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary in my company we are using a local git repository server gitlab and very few servers are able to access it the majority of servers don t have network access to our local gitlab instance since we use it exclusively for ansible so when i use the authorized key module to deploy ssh keys and tell it to pull the keys from our gitlab instance username keys the servers that can t access our gitlab instance cannot pull the keys i understand that the module is copied to the target machine first and then executed but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine sorry if this is to much to ask and i know there are other ways to deploy ssh keys but i find the ability to provide the keys from url very useful and it seems useless if target servers cannot access the git server to get the keys steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used try to deploy the keys to a target that cannot access the git server name deploy public ssh key for username authorized key user username key exclusive yes validate certs no state present expected results changed actual results because the target server cannot access the local git server the following error appears fatal failed changed false failed true invocation module args exclusive true key key options null manage dir true path null state present unique false user username validate certs false module name authorized key msg error getting key from ,1 1899,6577549829.0,IssuesEvent,2017-09-12 01:41:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ability to specify multiple AZ's when creating multiple cache nodes with elasticache module,affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type: - Feature Idea ##### Plugin Name: module: elasticache ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: [defaults] force_color = 1 hostfile = /etc/ansible/hosts library = /usr/share/ansible nocows = 1 ##### Environment: N/A ##### Summary: The Ansible elasticache module does not support spreading cache nodes across different availability zones. ##### Steps To Reproduce: The ""zone"" parameter for the elasticache module should support specifying a list of AZ's when multiple cache nodes are being created. ##### Expected Results: Each cache node will be distributed among the different AZ's listed for the ""zone"" parameter. 
##### Actual Results: Currently the elasticache module allows specifying one AZ and all cache nodes are put into the same AZ which decreases high availability. ",True,"Ability to specify multiple AZ's when creating multiple cache nodes with elasticache module - ##### Issue Type: - Feature Idea ##### Plugin Name: module: elasticache ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: [defaults] force_color = 1 hostfile = /etc/ansible/hosts library = /usr/share/ansible nocows = 1 ##### Environment: N/A ##### Summary: The Ansible elasticache module does not support spreading cache nodes across different availability zones. ##### Steps To Reproduce: The ""zone"" parameter for the elasticache module should support specifying a list of AZ's when multiple cache nodes are being created. ##### Expected Results: Each cache node will be distributed among the different AZ's listed for the ""zone"" parameter. ##### Actual Results: Currently the elasticache module allows specifying one AZ and all cache nodes are put into the same AZ which decreases high availability. ",1,ability to specify multiple az s when creating multiple cache nodes with elasticache module issue type feature idea plugin name module elasticache ansible version ansible ansible configuration force color hostfile etc ansible hosts library usr share ansible nocows environment n a summary the ansible elasticache module does not support spreading cache nodes across different availability zones steps to reproduce the zone parameter for the elasticache module should support specifying a list of az s when multiple cache nodes are being created expected results each cache node will be distributed among the different az s listed for the zone parameter actual results currently the elasticache module allows specifying one az and all cache nodes are put into the same az which decreases high availability ,1 1707,6574435760.0,IssuesEvent,2017-09-11 12:53:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible 2.2: docker_service 'Error starting project - ',affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION In version 2.2 I use: ``` - name: starting bind9 in docker docker_service: pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` In version 2.1.2 I used (due to lack of pull statement): ``` - name: pull image bind9 docker_image: name: docker.solvinity.net/bind9 pull: yes force: yes tags: - docker - name: starting bind9 in docker docker_service: # pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` ##### OS / ENVIRONMENT Ubuntu 16.04 / Ansible 2.2 from ppa ##### 
SUMMARY It used to work in 2.1.2 but now fails. ##### STEPS TO REPRODUCE Simply update to Ansible 2.2 and run playbook with snippet above. ##### EXPECTED RESULTS Upgrade / Start docker container ##### ACTUAL RESULTS ``` fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": { ""services"": { ""bind9"": { ""image"": ""docker.solvinity.net/bind9"", ""logging"": { ""driver"": ""syslog"", ""options"": { ""syslog-facility"": ""local6"", ""tag"": ""bind9"" } }, ""network_mode"": ""host"", ""restart"": ""always"", ""volumes"": [ ""/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data"", ""/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"" ] } }, ""version"": ""2"" }, ""dependencies"": true, ""docker_host"": null, ""files"": null, ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bind9"", ""project_src"": null, ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 120, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - "" } ```",True,"ansible 2.2: docker_service 'Error starting project - ' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION In version 2.2 I use: ``` - name: starting bind9 in docker docker_service: pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` In version 2.1.2 I used (due to lack of pull statement): ``` - name: pull image bind9 docker_image: name: docker.solvinity.net/bind9 pull: yes force: yes tags: - docker - name: starting bind9 in docker docker_service: # pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` ##### OS / ENVIRONMENT Ubuntu 16.04 / Ansible 2.2 from ppa ##### SUMMARY It used to work in 2.1.2 but now fails. ##### STEPS TO REPRODUCE Simply update to Ansible 2.2 and run playbook with snippet above. ##### EXPECTED RESULTS Upgrade / Start docker container ##### ACTUAL RESULTS ``` fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": { ""services"": { ""bind9"": { ""image"": ""docker.solvinity.net/bind9"", ""logging"": { ""driver"": ""syslog"", ""options"": { ""syslog-facility"": ""local6"", ""tag"": ""bind9"" } }, ""network_mode"": ""host"", ""restart"": ""always"", ""volumes"": [ ""/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data"", ""/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"" ] } }, ""version"": ""2"" }, ""dependencies"": true, ""docker_host"": null, ""files"": null, ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bind9"", ""project_src"": null, ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 120, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - "" } ```",1,ansible docker service error starting project issue type bug report component name docker service ansible version ansible config file home jhoeve a gitcollections authdns ansible code ansible cfg configured module search path default w o overrides configuration in version i use name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker in version i used due to lack of pull statement name pull image docker image name docker solvinity net pull yes force yes tags docker name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker os environment ubuntu ansible from ppa summary it used to work in but now fails steps to reproduce simply update to ansible and run playbook with snippet above expected results upgrade start docker container actual results fatal failed changed false failed true invocation module args api version null build false cacert path null cert path null debug false definition services image docker solvinity net logging driver syslog options syslog facility tag network mode host restart always volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc version dependencies true docker host null files null filter logger false hostname check false key path null nocache false project name project src null pull true recreate smart remove images null remove orphans false remove volumes false restarted false scale null services null ssl version null state present stopped false timeout tls null tls hostname null tls verify null module name docker service msg error starting project ,1 1854,6577396854.0,IssuesEvent,2017-09-12 
00:37:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,(possible data loss?) git does reset --hard even with force: no,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module git ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION ``` executable = /bin/bash command_warnings = true ``` ##### OS / ENVIRONMENT Host: OS X El Capitan Remote: Ubuntu 16.04 (This task, however, runs with connection: local) ##### SUMMARY I am running the task below, which leads to three problems: - it runs a `git reset --hard`, even when I explicitly set `force: false` - I did not expect the repository to be reset. I now understand the reasoning (describing a predictable state), but I believe a hard reset deserves some protection even against confused newbies like me. - This is a dataloss bug: The user loses all local modifications to the repository that have not been commited. - The command actually fails, because it apparently mixes up `remote` and `version` (although this could be my mistake) ##### STEPS TO REPRODUCE Run the following task ``` - name: pull updates from git repository git: repo: ""{{ git_repository }}"" update: yes force: no dest: ""{{ path_local }}"" clone: no remote: ""{{ git_remote }}"" version: ""{{ git_version }}"" ``` ##### EXPECTED RESULTS I expected a `git pull {{ remote }}` to be run, but I would now suggest to check for local modifications and fail (which I already had as a previous task) ##### ACTUAL RESULTS Note: The major problem is the command it's trying to run, not that it failed. ``` TASK [je: pull updates from git repository repo={{ git_repository }}, version={{ git_version }}, force=False, dest={{ path_local }}, remote={{ git_remote }}, clone=False, update=True] *** task path: /git/tasks/synchronize.yml:12 ESTABLISH LOCAL CONNECTION FOR USER: ma EXEC /bin/bash -c 'LANG=en_EN LC_ALL=en_EN LC_MESSAGES=en_EN /usr/bin/python' fatal: []: FAILED! => {""changed"": false, ""cmd"": ""/usr/local/bin/git reset --hard origin"", ""failed"": true, ""invocation"": {""module_args"": {""accept_hostkey"": false, ""bare"": false, ""clone"": false, ""depth"": null, ""dest"": ""/path/projects/"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": """", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD""}, ""module_name"": ""git""}, ""msg"": ""Failed to checkout branch master"", ""rc"": 128, ""stderr"": ""fatal: ambiguous argument 'origin': unknown revision or path not in the working tree.\nUse '--' to separate paths from revisions, like this:\n'git [...] -- [...]'\n"", ""stdout"": """", ""stdout_lines"": []} ``` ",True,"(possible data loss?) git does reset --hard even with force: no - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME module git ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION ``` executable = /bin/bash command_warnings = true ``` ##### OS / ENVIRONMENT Host: OS X El Capitan Remote: Ubuntu 16.04 (This task, however, runs with connection: local) ##### SUMMARY I am running the task below, which leads to three problems: - it runs a `git reset --hard`, even when I explicitly set `force: false` - I did not expect the repository to be reset. I now understand the reasoning (describing a predictable state), but I believe a hard reset deserves some protection even against confused newbies like me. 
- This is a dataloss bug: The user loses all local modifications to the repository that have not been commited. - The command actually fails, because it apparently mixes up `remote` and `version` (although this could be my mistake) ##### STEPS TO REPRODUCE Run the following task ``` - name: pull updates from git repository git: repo: ""{{ git_repository }}"" update: yes force: no dest: ""{{ path_local }}"" clone: no remote: ""{{ git_remote }}"" version: ""{{ git_version }}"" ``` ##### EXPECTED RESULTS I expected a `git pull {{ remote }}` to be run, but I would now suggest to check for local modifications and fail (which I already had as a previous task) ##### ACTUAL RESULTS Note: The major problem is the command it's trying to run, not that it failed. ``` TASK [je: pull updates from git repository repo={{ git_repository }}, version={{ git_version }}, force=False, dest={{ path_local }}, remote={{ git_remote }}, clone=False, update=True] *** task path: /git/tasks/synchronize.yml:12 ESTABLISH LOCAL CONNECTION FOR USER: ma EXEC /bin/bash -c 'LANG=en_EN LC_ALL=en_EN LC_MESSAGES=en_EN /usr/bin/python' fatal: []: FAILED! => {""changed"": false, ""cmd"": ""/usr/local/bin/git reset --hard origin"", ""failed"": true, ""invocation"": {""module_args"": {""accept_hostkey"": false, ""bare"": false, ""clone"": false, ""depth"": null, ""dest"": ""/path/projects/"", ""executable"": null, ""force"": false, ""key_file"": null, ""recursive"": true, ""reference"": null, ""refspec"": null, ""remote"": ""origin"", ""repo"": """", ""ssh_opts"": null, ""track_submodules"": false, ""update"": true, ""verify_commit"": false, ""version"": ""HEAD""}, ""module_name"": ""git""}, ""msg"": ""Failed to checkout branch master"", ""rc"": 128, ""stderr"": ""fatal: ambiguous argument 'origin': unknown revision or path not in the working tree.\nUse '--' to separate paths from revisions, like this:\n'git [...] 
-- [...]'\n"", ""stdout"": """", ""stdout_lines"": []} ``` ",1, possible data loss git does reset hard even with force no issue type bug report component name module git ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables executable bin bash command warnings true os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific host os x el capitan remote ubuntu this task however runs with connection local summary i am running the task below which leads to three problems it runs a git reset hard even when i explicitly set force false i did not expect the repository to be reset i now understand the reasoning describing a predictable state but i believe a hard reset deserves some protection even against confused newbies like me this is a dataloss bug the user loses all local modifications to the repository that have not been commited the command actually fails because it apparently mixes up remote and version although this could be my mistake steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run the following task name pull updates from git repository git repo git repository update yes force no dest path local clone no remote git remote version git version expected results i expected a git pull remote to be run but i would now suggest to check for local modifications and fail which i already had as a previous task actual results note the major problem is the command it s trying to run not that it failed task je pull updates from git repository repo git repository version git version force false dest path local remote git remote clone false update true task path git tasks synchronize yml establish local connection for user ma exec bin bash c lang en en lc all en en lc messages en en usr bin python fatal failed changed false cmd usr local bin git reset hard origin failed true invocation module args accept hostkey false bare false clone false depth null dest path projects executable null force false key file null recursive true reference null refspec null remote origin repo ssh opts null track submodules false update true verify commit false version head module name git msg failed to checkout branch master rc stderr fatal ambiguous argument origin unknown revision or path not in the working tree nuse to separate paths from revisions like this n git n stdout stdout lines ,1 1670,6574082601.0,IssuesEvent,2017-09-11 11:24:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"service/systemd module cannot enable new (""missing"") services",affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report (regression) ##### COMPONENT NAME module `service`/`systemd` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/lukas/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing relevant there ##### OS / ENVIRONMENT both Debian stretch/sid ##### SUMMARY I used to use the `service` module to enable (i.e. add) new services (i.e. unit files) with Ansible 2.1. With 2.2, the corresponding Playbook fails with ""Could not find the requested service \\""'/tmp/dummy.service'\\"""". 
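Back on the git reset --hard report above: until the module's force handling is settled, a defensive pattern is to check for uncommitted changes yourself and fail before the git task can reset anything, essentially the "previous task" the reporter mentions having used. A minimal sketch reusing the variables from that report:

```
- name: Refuse to touch a checkout with local modifications
  command: git status --porcelain
  args:
    chdir: "{{ path_local }}"
  register: git_status
  changed_when: false
  failed_when: git_status.stdout | length > 0

- name: Update only when the working tree is clean
  git:
    repo: "{{ git_repository }}"
    dest: "{{ path_local }}"
    remote: "{{ git_remote }}"
    version: "{{ git_version }}"
    update: yes
    clone: no
```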
##### STEPS TO REPRODUCE run the Playbook: ``` - name: add a new unit file hosts: localhost become: yes vars: f: /tmp/dummy.service tasks: - blockinfile: dest: ""{{ f }}"" create: yes block: | [Service] ExecStart=echo Type=oneshot - service: name=""{{ f }}"" enabled=yes ``` ##### EXPECTED RESULTS Service gets symlinked and enabled accordingly (like in 2.1). ##### ACTUAL RESULTS ``` $ ansible-playbook /tmp/playbook.yml -Kvvvv Using /home/lukas/.ansible.cfg as config file SUDO password: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: playbook.yml ********************************************************* 1 plays in /tmp/playbook.yml PLAY [add a new unit file] ***************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=olalnnvscferlfdemtugfdgjqrneqipt] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-olalnnvscferlfdemtugfdgjqrneqipt; /usr/bin/python'""'""' && sleep 0' ok: [localhost] TASK [blockinfile] ************************************************************* task path: /tmp/playbook.yml:10 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/extras/files/blockinfile.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=jhuyttrpjgbqflakdvmfhpjossmgctnj] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-jhuyttrpjgbqflakdvmfhpjossmgctnj; /usr/bin/python'""'""' && sleep 0' ok: [localhost] => { ""changed"": false, ""invocation"": { ""module_args"": { ""backup"": false, ""block"": ""[Service]\nExecStart=echo\nType=oneshot\n"", ""content"": null, ""create"": true, ""delimiter"": null, ""dest"": ""/tmp/dummy.service"", ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""insertafter"": null, ""insertbefore"": null, ""marker"": ""# {mark} ANSIBLE MANAGED BLOCK"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": ""present"", ""unsafe_writes"": null, ""validate"": null }, ""module_name"": ""blockinfile"" }, ""msg"": """" } TASK [service] ***************************************************************** task path: /tmp/playbook.yml:18 Running systemd Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/systemd.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=jihnhcdcrlgqulqxcpjdplduksltuceb] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-jihnhcdcrlgqulqxcpjdplduksltuceb; /usr/bin/python'""'""' && sleep 0' fatal: [localhost]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""/tmp/dummy.service"", ""state"": null, ""user"": false } }, ""msg"": ""Could not find the requested service \""'/tmp/dummy.service'\"": "" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` ##### WORKAROUND ``` - register: systemctl_result command: systemctl enable /tmp/dummy.service changed_when: systemctl_result.stdout != """" ```",True,"service/systemd module cannot enable new (""missing"") services - ##### ISSUE TYPE - Bug Report (regression) ##### COMPONENT NAME module `service`/`systemd` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/lukas/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing relevant there ##### OS / ENVIRONMENT both Debian stretch/sid ##### SUMMARY I used to use the `service` module to enable (i.e. add) new services (i.e. unit files) with Ansible 2.1. With 2.2, the corresponding Playbook fails with ""Could not find the requested service \\""'/tmp/dummy.service'\\"""". ##### STEPS TO REPRODUCE run the Playbook: ``` - name: add a new unit file hosts: localhost become: yes vars: f: /tmp/dummy.service tasks: - blockinfile: dest: ""{{ f }}"" create: yes block: | [Service] ExecStart=echo Type=oneshot - service: name=""{{ f }}"" enabled=yes ``` ##### EXPECTED RESULTS Service gets symlinked and enabled accordingly (like in 2.1). ##### ACTUAL RESULTS ``` $ ansible-playbook /tmp/playbook.yml -Kvvvv Using /home/lukas/.ansible.cfg as config file SUDO password: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: playbook.yml ********************************************************* 1 plays in /tmp/playbook.yml PLAY [add a new unit file] ***************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=olalnnvscferlfdemtugfdgjqrneqipt] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-olalnnvscferlfdemtugfdgjqrneqipt; /usr/bin/python'""'""' && sleep 0' ok: [localhost] TASK [blockinfile] ************************************************************* task path: /tmp/playbook.yml:10 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/extras/files/blockinfile.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=jhuyttrpjgbqflakdvmfhpjossmgctnj] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-jhuyttrpjgbqflakdvmfhpjossmgctnj; /usr/bin/python'""'""' && sleep 0' ok: [localhost] => { ""changed"": false, ""invocation"": { ""module_args"": { ""backup"": false, ""block"": ""[Service]\nExecStart=echo\nType=oneshot\n"", ""content"": null, ""create"": true, ""delimiter"": null, ""dest"": ""/tmp/dummy.service"", ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""insertafter"": null, ""insertbefore"": null, ""marker"": ""# {mark} ANSIBLE MANAGED BLOCK"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, 
""src"": null, ""state"": ""present"", ""unsafe_writes"": null, ""validate"": null }, ""module_name"": ""blockinfile"" }, ""msg"": """" } TASK [service] ***************************************************************** task path: /tmp/playbook.yml:18 Running systemd Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/systemd.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: lukas <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p ""[sudo via ansible, key=jihnhcdcrlgqulqxcpjdplduksltuceb] password: "" -u root /bin/sh -c '""'""'echo BECOME-SUCCESS-jihnhcdcrlgqulqxcpjdplduksltuceb; /usr/bin/python'""'""' && sleep 0' fatal: [localhost]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""/tmp/dummy.service"", ""state"": null, ""user"": false } }, ""msg"": ""Could not find the requested service \""'/tmp/dummy.service'\"": "" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=1 ``` ##### WORKAROUND ``` - register: systemctl_result command: systemctl enable /tmp/dummy.service changed_when: systemctl_result.stdout != """" ```",1,service systemd module cannot enable new missing services issue type bug report regression component name module service systemd ansible version ansible config file home lukas ansible cfg configured module search path default w o overrides configuration nothing relevant there os environment both debian stretch sid summary i used to use the service module to enable i e add new services i e unit files with ansible with the corresponding playbook fails with could not find the requested service tmp dummy service steps to reproduce run the playbook name add a new unit file hosts localhost become yes vars f tmp dummy service tasks blockinfile dest f create yes block execstart echo type oneshot service name f enabled yes expected results service gets symlinked and enabled accordingly like in actual results ansible playbook tmp playbook yml kvvvv using home lukas ansible cfg as config file sudo password loading callback plugin default of type stdout from usr lib dist packages ansible plugins callback init pyc playbook playbook yml plays in tmp playbook yml play task using module file usr lib dist packages ansible modules core system setup py establish local connection for user lukas exec bin sh c sudo h s p password u root bin sh c echo become success olalnnvscferlfdemtugfdgjqrneqipt usr bin python sleep ok task task path tmp playbook yml using module file usr lib dist packages ansible modules extras files blockinfile py establish local connection for user lukas exec bin sh c sudo h s p password u root bin sh c echo become success jhuyttrpjgbqflakdvmfhpjossmgctnj usr bin python sleep ok changed false invocation module args backup false block nexecstart echo ntype oneshot n content null create true delimiter null dest tmp dummy service directory mode null follow false force null group null insertafter null insertbefore null marker mark ansible managed block mode null owner null regexp null remote src null selevel null serole null setype null seuser null src null state present unsafe writes null validate null module name blockinfile msg task task path tmp playbook yml running systemd using module file usr lib dist packages ansible modules core system systemd py establish local connection for user lukas exec bin sh c sudo h s p password u root bin sh c echo become success 
jihnhcdcrlgqulqxcpjdplduksltuceb usr bin python sleep fatal failed changed false failed true invocation module args daemon reload false enabled true masked null name tmp dummy service state null user false msg could not find the requested service tmp dummy service play recap localhost ok changed unreachable failed workaround register systemctl result command systemctl enable tmp dummy service changed when systemctl result stdout ,1 793,4390064153.0,IssuesEvent,2016-08-09 01:05:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,reopened,File mode 0755 fails for a tar in `unarchive`,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module: `unarchive` ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY `unarchive` fails with a `mode` of `0755`. ##### STEPS TO REPRODUCE Create a `tar.gz` file (or any format that will be processed by [`TgzArchive`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L537)): ``` $ mkdir mydir $ tar -czf mydir.tar.gz mydir ``` Use a playbook similar to the following (the important bit is the file mode): ``` - hosts: localhost tasks: - name: extract dir unarchive: src: ""./mydir.tar.gz"" dest: ""./"" mode: 0755 ``` ##### EXPECTED RESULTS I would expect the playbook to run without error. ##### ACTUAL RESULTS ``` $ ansible-playbook myplaybook.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [extract dir] ************************************************************* fatal: [localhost]: FAILED! => {""changed"": false, ""dest"": ""./"", ""failed"": true, ""gid"": 1000, ""group"": ""jesse"", ""handler"": ""TgzArchive"", ""mode"": ""0775"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: './mydir'"", ""owner"": ""jesse"", ""size"": 4096, ""src"": ""/home/jesse/.ansible/tmp/ansible-tmp-1467149922.64-77975041845860/source"", ""state"": ""directory"", ""uid"": 1000} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @myplaybook.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` Running the same playbook, but replacing the file mode `0755` with `0777` runs without error: ``` $ ansible-playbook myplaybook.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [extract dir] ************************************************************* changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` The error message in itself does not tell us a lot, but if we add some debug traces ([`check_results`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L778)), we see that the following `tar` command is being executed: ``` /bin/tar -C ""./"" -dz --mode=""493"" -f ""/home/jesse/.ansible/tmp/ansible-tmp-1467150531.52-232348981801230/source"" ``` and that `tar` is not happy about it: ``` /bin/tar: Invalid mode given on option /bin/tar: Error is not recoverable: exiting now ``` The integer `493` passed to tar as a mode happens to be the decimal representation 
of `0755`. Tar complains because this is an invalid octal number for the `mode`. The file mode is being passed as decimal because it is passed to `tar` after casting it directly to a string (i.e. results in a decimal representation) in [`is_unarchived`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L585) and [`unarchive`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L628). However, it looks like `tar` does not even care about `--mode` on `--diff` or `--extract` (emphasis is mine): > `--mode=permissions' **When adding files to an archive**, tar will use permissions for the archive members, rather than the permissions from the files. My understanding is that passing this parameter should not have an impact on the files handled by `-d` or `-x`. This would explain why the issue was hard to encounter; only a file mode that would yield an invalid octal number *after* a conversion to a decimal number would make it fail. It does not have an effect on the final mode set. Is there a particular reason for passing `--mode` when calling `tar -x` or `tar -d`? If this is indeed a bug and I am not misconfiguring my playbook in some way, the workaround is rather simple -- using a symbolic representation for the file mode works like a charm. If possible, I would be pleased to work on a PR to fix the issue in order to get involved in the project and its contribution process.",True,"File mode 0755 fails for a tar in `unarchive` - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module: `unarchive` ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY `unarchive` fails with a `mode` of `0755`. ##### STEPS TO REPRODUCE Create a `tar.gz` file (or any format that will be processed by [`TgzArchive`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L537)): ``` $ mkdir mydir $ tar -czf mydir.tar.gz mydir ``` Use a playbook similar to the following (the important bit is the file mode): ``` - hosts: localhost tasks: - name: extract dir unarchive: src: ""./mydir.tar.gz"" dest: ""./"" mode: 0755 ``` ##### EXPECTED RESULTS I would expect the playbook to run without error. ##### ACTUAL RESULTS ``` $ ansible-playbook myplaybook.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [extract dir] ************************************************************* fatal: [localhost]: FAILED! 
=> {""changed"": false, ""dest"": ""./"", ""failed"": true, ""gid"": 1000, ""group"": ""jesse"", ""handler"": ""TgzArchive"", ""mode"": ""0775"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: './mydir'"", ""owner"": ""jesse"", ""size"": 4096, ""src"": ""/home/jesse/.ansible/tmp/ansible-tmp-1467149922.64-77975041845860/source"", ""state"": ""directory"", ""uid"": 1000} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @myplaybook.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 ``` Running the same playbook, but replacing the file mode `0755` with `0777` runs without error: ``` $ ansible-playbook myplaybook.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [extract dir] ************************************************************* changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` The error message in itself does not tell us a lot, but if we add some debug traces ([`check_results`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L778)), we see that the following `tar` command is being executed: ``` /bin/tar -C ""./"" -dz --mode=""493"" -f ""/home/jesse/.ansible/tmp/ansible-tmp-1467150531.52-232348981801230/source"" ``` and that `tar` is not happy about it: ``` /bin/tar: Invalid mode given on option /bin/tar: Error is not recoverable: exiting now ``` The integer `493` passed to tar as a mode happens to be the decimal representation of `0755`. Tar complains because this is an invalid octal number for the `mode`. The file mode is being passed as decimal because it is passed to `tar` after casting it directly to a string (i.e. results in a decimal representation) in [`is_unarchived`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L585) and [`unarchive`](https://github.com/ansible/ansible-modules-core/blob/devel/files/unarchive.py#L628). However, it looks like `tar` does not even care about `--mode` on `--diff` or `--extract` (emphasis is mine): > `--mode=permissions' **When adding files to an archive**, tar will use permissions for the archive members, rather than the permissions from the files. My understanding is that passing this parameter should not have an impact on the files handled by `-d` or `-x`. This would explain why the issue was hard to encounter; only a file mode that would yield an invalid octal number *after* a conversion to a decimal number would make it fail. It does not have an effect on the final mode set. Is there a particular reason for passing `--mode` when calling `tar -x` or `tar -d`? If this is indeed a bug and I am not misconfiguring my playbook in some way, the workaround is rather simple -- using a symbolic representation for the file mode works like a charm. 
If possible, I would be pleased to work on a PR to fix the issue in order to get involved in the project and its contribution process.",1,file mode fails for a tar in unarchive issue type bug report component name module unarchive ansible version ansible configuration n a os environment n a summary unarchive fails with a mode of steps to reproduce create a tar gz file or any format that will be processed by mkdir mydir tar czf mydir tar gz mydir use a playbook similar to the following the important bit is the file mode hosts localhost tasks name extract dir unarchive src mydir tar gz dest mode expected results i would expect the playbook to run without error actual results ansible playbook myplaybook yml play task ok task fatal failed changed false dest failed true gid group jesse handler tgzarchive mode msg unexpected error when accessing exploded file no such file or directory mydir owner jesse size src home jesse ansible tmp ansible tmp source state directory uid no more hosts left to retry use limit myplaybook retry play recap localhost ok changed unreachable failed running the same playbook but replacing the file mode with runs without error ansible playbook myplaybook yml play task ok task changed play recap localhost ok changed unreachable failed the error message in itself does not tell us a lot but if we add some debug traces we see that the following tar command is being executed bin tar c dz mode f home jesse ansible tmp ansible tmp source and that tar is not happy about it bin tar invalid mode given on option bin tar error is not recoverable exiting now the integer passed to tar as a mode happens to be the decimal representation of tar complains because this is an invalid octal number for the mode the file mode is being passed as decimal because it is passed to tar after casting it directly to a string i e results in a decimal representation in and however it looks like tar does not even care about mode on diff or extract emphasis is mine mode permissions when adding files to an archive tar will use permissions for the archive members rather than the permissions from the files my understanding is that passing this parameter should not have an impact on the files handled by d or x this would explain why the issue was hard to encounter only a file mode that would yield an invalid octal number after a conversion to a decimal number would make it fail it does not have an effect on the final mode set is there a particular reason for passing mode when calling tar x or tar d if this is indeed a bug and i am not misconfiguring my playbook in some way the workaround is rather simple using a symbolic representation for the file mode works like a charm if possible i would be pleased to work on a pr to fix the issue in order to get involved in the project and its contribution process ,1 1600,6572381055.0,IssuesEvent,2017-09-11 01:52:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,postgresql_db add parameter to set 'connection database' ,affects_2.3 feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME postgresql_db ##### ANSIBLE VERSION N/A ##### SUMMARY If ""postgres"" db doesn't exist by default we cannot create any new db. DB's can only be created if postgres db exists by default. 
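A minimal sketch of the recovery step implied here (and spelled out in the discussion that follows): recreate the default "postgres" maintenance database before running postgresql_db tasks. Running as the postgres system user via become is an assumption of this sketch; the connection-database parameter the report asks for did not exist at the time and is not shown.
```
# Not idempotent: createdb fails if the database already exists. Shown only to
# make the workaround concrete; become_user: postgres is an assumption.
- name: recreate the default "postgres" maintenance database (sketch)
  become: yes
  become_user: postgres
  command: createdb postgres
```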
https://github.com/ansible/ansible-modules-core/blob/devel/database/postgresql/postgresql_db.py#L273 As larsks from #ansible in freenode said : ""in that line, postgres could either (a) be changed to 'template0', or (b) you could add a module parameter that lets you use an arbitrary alternate name when necessary."" What could be done to fix this problem ? If you need any more info on how to replicate this bug but you can do something like this : - Terminal # dropdb postgres - in main.yml - postgresql_db: name=postgres or - postgresql_db: name=foobar But if i do from the terminal : # createdb postgres It will accept Thanks in advance ",True,"postgresql_db add parameter to set 'connection database' - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME postgresql_db ##### ANSIBLE VERSION N/A ##### SUMMARY If ""postgres"" db doesn't exist by default we cannot create any new db. DB's can only be created if postgres db exists by default. https://github.com/ansible/ansible-modules-core/blob/devel/database/postgresql/postgresql_db.py#L273 As larsks from #ansible in freenode said : ""in that line, postgres could either (a) be changed to 'template0', or (b) you could add a module parameter that lets you use an arbitrary alternate name when necessary."" What could be done to fix this problem ? If you need any more info on how to replicate this bug but you can do something like this : - Terminal # dropdb postgres - in main.yml - postgresql_db: name=postgres or - postgresql_db: name=foobar But if i do from the terminal : # createdb postgres It will accept Thanks in advance ",1,postgresql db add parameter to set connection database issue type feature idea component name postgresql db ansible version n a summary if postgres db doesn t exist by default we cannot create any new db db s can only be created if postgres db exists by default as larsks from ansible in freenode said in that line postgres could either a be changed to or b you could add a module parameter that lets you use an arbitrary alternate name when necessary what could be done to fix this problem if you need any more info on how to replicate this bug but you can do something like this terminal dropdb postgres in main yml postgresql db name postgres or postgresql db name foobar but if i do from the terminal createdb postgres it will accept thanks in advance ,1 839,4479326075.0,IssuesEvent,2016-08-27 14:51:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module SHA version not work for clone,docs_report P3 waiting_on_maintainer,"In documentation: What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name. But it not work for clone, because `git clone` not support this. So in this case better note about this in documentation. ",True,"git module SHA version not work for clone - In documentation: What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name. But it not work for clone, because `git clone` not support this. So in this case better note about this in documentation. 
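A minimal sketch of the documented usage this report is about, with placeholder repository URL and SHA; per the report, the initial `git clone` itself cannot take a SHA, which is exactly the caveat the report asks to have added to the module documentation:
```
# Placeholder repo and commit; version may be a full 40-character SHA-1 per the
# module docs, but the report notes this does not apply to the clone step itself.
- name: check out a specific commit
  git:
    repo: https://example.com/project.git
    dest: /srv/project
    version: 0123456789abcdef0123456789abcdef01234567
```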
",1,git module sha version not work for clone in documentation what version of the repository to check out this can be the full character sha hash the literal string head a branch name or a tag name but it not work for clone because git clone not support this so in this case better note about this in documentation ,1 876,4541032413.0,IssuesEvent,2016-09-09 16:25:21,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,junos_package.py: package_version undefined,affects_2.2 bug_report in progress networking P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_package ##### ANSIBLE VERSION devel ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blame/devel/network/junos/junos_package.py#L141 `wants_ver = module.params['version'] or package_version(module)` I can't find anywhere in the `ansible/ansible` code base where `package_version` is defined ",True,"junos_package.py: package_version undefined - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_package ##### ANSIBLE VERSION devel ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blame/devel/network/junos/junos_package.py#L141 `wants_ver = module.params['version'] or package_version(module)` I can't find anywhere in the `ansible/ansible` code base where `package_version` is defined ",1,junos package py package version undefined issue type bug report component name junos package ansible version devel configuration os environment summary wants ver module params or package version module i can t find anywhere in the ansible ansible code base where package version is defined ,1 1863,6577414013.0,IssuesEvent,2017-09-12 00:44:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2_lc support for ""PlacementTenancy"" option",affects_2.0 aws cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_lc ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY AWS cli and api supports ""Placement Tenancy"" in the launch config allowing to deploy dedicated ec2 instance. The ec2_lc is missing this option. ",True,"ec2_lc support for ""PlacementTenancy"" option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_lc ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY AWS cli and api supports ""Placement Tenancy"" in the launch config allowing to deploy dedicated ec2 instance. The ec2_lc is missing this option. 
",1, lc support for placementtenancy option issue type feature idea component name lc ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary aws cli and api supports placement tenancy in the launch config allowing to deploy dedicated instance the lc is missing this option ,1 1720,6574483942.0,IssuesEvent,2017-09-11 13:03:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible apt ignore cache_valid_time value,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/vagrant/my/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg: [defaults] hostfile = hosts ##### OS / ENVIRONMENT OS you are running Ansible from: Ubuntu 16.04 OS you are managing: Ubuntu 16.04 ##### SUMMARY After upgradig to ansible 2.2 I always get changes in apt module because it ignore **cache_valid_time** value. ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localvm become: yes tasks: - name: Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago apt: update_cache: yes cache_valid_time: 3600 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv ``` ##### EXPECTED RESULTS Update apt cache on first run, skip updating cache on second run. ##### ACTUAL RESULTS Always changes. ``` vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `"" && echo ansible-tmp-1478178800.59-26361197346329=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpGz1Eb9 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o 
ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-bblyfpmawwxwihkyhdzgsrwimfkjlzuk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [192.168.60.4] TASK [Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `"" && echo ansible-tmp-1478178801.29-209769775274469=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpb8HOiL TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python 
/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [192.168.60.4] => { ""cache_update_time"": 1478170123, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 3600, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": ""apt"" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `"" && echo ansible-tmp-1478178871.45-218992397586023=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; 
/usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [192.168.60.4] TASK [Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `"" && echo ansible-tmp-1478178872.37-148384000832646=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmp3rCfzf TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [192.168.60.4] => { ""cache_update_time"": 1478170123, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 3600, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": ""apt"" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 
unreachable=0 failed=0 ``` It seems **cache_update_time** didn't updated.",True,"Ansible apt ignore cache_valid_time value - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/vagrant/my/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg: [defaults] hostfile = hosts ##### OS / ENVIRONMENT OS you are running Ansible from: Ubuntu 16.04 OS you are managing: Ubuntu 16.04 ##### SUMMARY After upgradig to ansible 2.2 I always get changes in apt module because it ignore **cache_valid_time** value. ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localvm become: yes tasks: - name: Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago apt: update_cache: yes cache_valid_time: 3600 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv ``` ##### EXPECTED RESULTS Update apt cache on first run, skip updating cache on second run. ##### ACTUAL RESULTS Always changes. ``` vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `"" && echo ansible-tmp-1478178800.59-26361197346329=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpGz1Eb9 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c 
'""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-bblyfpmawwxwihkyhdzgsrwimfkjlzuk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [192.168.60.4] TASK [Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `"" && echo ansible-tmp-1478178801.29-209769775274469=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpb8HOiL TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [192.168.60.4] => { ""cache_update_time"": 1478170123, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 3600, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": 
""apt"" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `"" && echo ansible-tmp-1478178871.45-218992397586023=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ok: [192.168.60.4] TASK [Only run ""update_cache=yes"" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `"" && echo ansible-tmp-1478178872.37-148384000832646=""` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `"" ) && sleep 0'""'""'' <192.168.60.4> PUT /tmp/tmp3rCfzf TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '""'""'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'""'""'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [192.168.60.4] => { ""cache_update_time"": 1478170123, ""cache_updated"": true, ""changed"": true, ""invocation"": { ""module_args"": { ""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": 3600, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""only_upgrade"": false, ""package"": null, ""purge"": false, ""state"": ""present"", ""update_cache"": true, ""upgrade"": null }, ""module_name"": ""apt"" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 ``` It seems **cache_update_time** didn't updated.",1,ansible apt ignore cache valid time value issue type bug report component name apt ansible version ansible config file home vagrant my ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible cfg hostfile hosts os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific os you are running ansible from ubuntu os you are managing ubuntu summary after upgradig to ansible i always get changes in apt module because it ignore 
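For reference, the task under discussion in expanded form, taken directly from the report's own playbook; with a correctly behaving apt module this should refresh the cache only when the last update is older than one hour, whereas the report shows it refreshing on every run under 2.2.0:
```
- hosts: localvm
  become: yes
  tasks:
    # Straight restatement of the report's task; no new parameters are assumed.
    - name: refresh the apt cache at most once per hour
      apt:
        update_cache: yes
        cache_valid_time: 3600
```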
cache valid time value steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used test yml hosts localvm become yes tasks name only run update cache yes if the last one is more than seconds ago apt update cache yes cache valid time vagrant ans contrl my ansible playbook test yml vvv expected results update apt cache on first run skip updating cache on second run actual results always changes vagrant ans contrl my ansible playbook test yml vvv using home vagrant my ansible cfg as config file playbook test yml plays in test yml play task using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp setup py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success bblyfpmawwxwihkyhdzgsrwimfkjlzuk usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant my test yml using module file usr lib dist packages ansible modules core packaging os apt py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp apt py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user 
vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp apt py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success soyskgemfitdsrhonujvdopjieqzexmq usr bin python home vagrant ansible tmp ansible tmp apt py rm rf home vagrant ansible tmp ansible tmp dev null sleep changed cache update time cache updated true changed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null only upgrade false package null purge false state present update cache true upgrade null module name apt play recap ok changed unreachable failed vagrant ans contrl my ansible playbook test yml vvv using home vagrant my ansible cfg as config file playbook test yml plays in test yml play task using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp setup py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success lwuttqhzswvnqlvkfcbraivcbuceisuz usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant my test yml using module file usr lib dist packages ansible modules core packaging os apt py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home 
ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp apt py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp apt py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success xhjxsxornuzelhyvlsiksuindfcmjlpx usr bin python home vagrant ansible tmp ansible tmp apt py rm rf home vagrant ansible tmp ansible tmp dev null sleep changed cache update time cache updated true changed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null only upgrade false package null purge false state present update cache true upgrade null module name apt play recap ok changed unreachable failed it seems cache update time didn t updated ,1 1708,6574437078.0,IssuesEvent,2017-09-11 12:53:49,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image : make action explicit,affects_2.2 cloud docker feature_idea waiting_on_maintainer,"Hi, ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_images ##### ANSIBLE VERSION ``` root$ ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Not relevant ##### OS / ENVIRONMENT Not relevant ##### SUMMARY The docker_images module is very difficult to use because there is no action attribute to tells what we want to do. The action is implicit (if path -> build, if tag sometimes it tags sometimes not, ...). It would be more easy to make the action explicit with an action attribute: tag -> tags (alias) an image, build -> build, ... ##### STEPS TO REPRODUCE Example of difficulty to use there: http://stackoverflow.com/questions/38169244/how-do-i-tag-a-local-docker-image-with-ansible-docker-image-module How to do the equivalent of: ``` docker tag ? ``` ",True,"docker_image : make action explicit - Hi, ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_images ##### ANSIBLE VERSION ``` root$ ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION Not relevant ##### OS / ENVIRONMENT Not relevant ##### SUMMARY The docker_images module is very difficult to use because there is no action attribute to tells what we want to do. The action is implicit (if path -> build, if tag sometimes it tags sometimes not, ...). 
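For the apt `cache_valid_time` report above (the cache is refreshed and `changed` is reported on every run), the behaviour the reporter expects amounts to an mtime check against the last-update stamp before running `apt-get update`. The sketch below is illustrative only, not the apt module's actual source; the stamp path is the Debian/Ubuntu convention and is an assumption here.

```python
#!/usr/bin/env python
# Minimal sketch of a cache_valid_time guard: skip `apt-get update` (and report
# no change) when the cache stamp is younger than the requested window.
# Not the Ansible apt module's real code; the stamp path is an assumption
# based on the Debian/Ubuntu convention.
import os
import time

APT_UPDATE_STAMP = "/var/lib/apt/periodic/update-success-stamp"


def cache_is_fresh(valid_seconds, stamp_path=APT_UPDATE_STAMP):
    """Return True when the apt cache was refreshed within `valid_seconds`."""
    try:
        age = time.time() - os.path.getmtime(stamp_path)
    except OSError:
        # No stamp yet: the cache has never been updated on this host.
        return False
    return age <= valid_seconds


if __name__ == "__main__":
    if cache_is_fresh(3600):
        print("cache is fresh: skip apt-get update, report changed=False")
    else:
        print("cache is stale: run apt-get update, report changed=True")
```

Whatever the module does internally, a guard of this shape is what would make the second run of the reproduction playbook come back with `changed=False`.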
It would be more easy to make the action explicit with an action attribute: tag -> tags (alias) an image, build -> build, ... ##### STEPS TO REPRODUCE Example of difficulty to use there: http://stackoverflow.com/questions/38169244/how-do-i-tag-a-local-docker-image-with-ansible-docker-image-module How to do the equivalent of: ``` docker tag ? ``` ",1,docker image make action explicit hi issue type feature idea component name docker images ansible version root ansible version ansible config file etc ansible ansible cfg configured module search path configuration not relevant os environment not relevant summary the docker images module is very difficult to use because there is no action attribute to tells what we want to do the action is implicit if path build if tag sometimes it tags sometimes not it would be more easy to make the action explicit with an action attribute tag tags alias an image build build steps to reproduce example of difficulty to use there how to do the equivalent of docker tag ,1 1206,5146081372.0,IssuesEvent,2017-01-12 23:38:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Failure while using htpasswd module,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME htpasswd ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT ArchLinux ##### SUMMARY htpasswd module fails with message: `invalid version number '1.7.0.post20161124160753` Looks like it's related to `python2-passlib` package (installed from archlinux repository). ##### STEPS TO REPRODUCE Using a role with a task like below ``` htpasswd: path=/etc/app/auth/htpasswd name=someuser crypt_scheme=bcrypt password={{ password }} owner=root mode=0640 ``` ##### EXPECTED RESULTS User entry added to htpasswd file. ##### ACTUAL RESULTS Task failure. ``` fatal: [host]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""create"": true, ""crypt_scheme"": ""bcrypt"", ""delimiter"": null, ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""mode"": ""0640"", ""name"": ""someuser"", ""owner"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""path"": ""/etc/app/auth/htpasswd"", ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": ""present"", ""unsafe_writes"": null }, ""module_name"": ""htpasswd"" }, ""msg"": ""invalid version number '1.7.0.post20161124160753'"" } ``` ",True,"Failure while using htpasswd module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME htpasswd ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT ArchLinux ##### SUMMARY htpasswd module fails with message: `invalid version number '1.7.0.post20161124160753` Looks like it's related to `python2-passlib` package (installed from archlinux repository). ##### STEPS TO REPRODUCE Using a role with a task like below ``` htpasswd: path=/etc/app/auth/htpasswd name=someuser crypt_scheme=bcrypt password={{ password }} owner=root mode=0640 ``` ##### EXPECTED RESULTS User entry added to htpasswd file. ##### ACTUAL RESULTS Task failure. ``` fatal: [host]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""backup"": null, ""content"": null, ""create"": true, ""crypt_scheme"": ""bcrypt"", ""delimiter"": null, ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""mode"": ""0640"", ""name"": ""someuser"", ""owner"": ""root"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""path"": ""/etc/app/auth/htpasswd"", ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": ""present"", ""unsafe_writes"": null }, ""module_name"": ""htpasswd"" }, ""msg"": ""invalid version number '1.7.0.post20161124160753'"" } ``` ",1,failure while using htpasswd module issue type bug report component name htpasswd ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment archlinux summary htpasswd module fails with message invalid version number looks like it s related to passlib package installed from archlinux repository steps to reproduce using a role with a task like below htpasswd path etc app auth htpasswd name someuser crypt scheme bcrypt password password owner root mode expected results user entry added to htpasswd file actual results task failure fatal failed changed false failed true invocation module args backup null content null create true crypt scheme bcrypt delimiter null directory mode null follow false force null group null mode name someuser owner root password value specified in no log parameter path etc app auth htpasswd regexp null remote src null selevel null serole null setype null seuser null src null state present unsafe writes null module name htpasswd msg invalid version number ,1 994,4758682855.0,IssuesEvent,2016-10-24 20:13:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,get_url prevents caching of data,affects_2.0 bug_report feature_idea waiting_on_maintainer,"Issue Type: Bug Report Component Name: get_url module Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides Ansible Configuration: Stock. Environment: Ubuntu 14.04 Summary: get_url seems to add headers that prevent efficient use of a caching proxy. Steps To Reproduce: I am trying to download some ISOs to multiple machines (via a proxy to conserve bandwidth). The ISO is being stored by the proxy, and the machine is using the proxy, but it is downloading from the upstream source every time. Squid is showing in its logs: 1454462392.008 532579 192.168.122.10 TCP_CLIENT_REFRESH_MISS/200 632291702 GET http://mirrors.kernel.org/centos/7.2.1511/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso - HIER_DIRECT/198.145.20.143 application/octet-stream According to the squid docs: TCP_CLIENT_REFRESH_MISS The client issued a ""no-cache"" pragma, or some analogous cache control command along with the request. Thus, the cache has to refetch the object. Using a standard wget (or, say, yum to retrieve packages) does not cause CLIENT_REFRESH_MISSes. Is there something in the get_url code that is causing the sending of a no cache pragma? Or maybe it's not turning off some default option in the underlying urllib (or whatever it uses under the hood)? Expected Results: File will be downloaded much faster, and from the cache. 
Actual Results: Squid is actually downloading the file again.",True,"get_url prevents caching of data - Issue Type: Bug Report Component Name: get_url module Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides Ansible Configuration: Stock. Environment: Ubuntu 14.04 Summary: get_url seems to add headers that prevent efficient use of a caching proxy. Steps To Reproduce: I am trying to download some ISOs to multiple machines (via a proxy to conserve bandwidth). The ISO is being stored by the proxy, and the machine is using the proxy, but it is downloading from the upstream source every time. Squid is showing in its logs: 1454462392.008 532579 192.168.122.10 TCP_CLIENT_REFRESH_MISS/200 632291702 GET http://mirrors.kernel.org/centos/7.2.1511/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso - HIER_DIRECT/198.145.20.143 application/octet-stream According to the squid docs: TCP_CLIENT_REFRESH_MISS The client issued a ""no-cache"" pragma, or some analogous cache control command along with the request. Thus, the cache has to refetch the object. Using a standard wget (or, say, yum to retrieve packages) does not cause CLIENT_REFRESH_MISSes. Is there something in the get_url code that is causing the sending of a no cache pragma? Or maybe it's not turning off some default option in the underlying urllib (or whatever it uses under the hood)? Expected Results: File will be downloaded much faster, and from the cache. Actual Results: Squid is actually downloading the file again.",1,get url prevents caching of data issue type bug report component name get url module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible configuration stock environment ubuntu summary get url seems to add headers that prevent efficient use of a caching proxy steps to reproduce i am trying to download some isos to multiple machines via a proxy to conserve bandwidth the iso is being stored by the proxy and the machine is using the proxy but it is downloading from the upstream source every time squid is showing in its logs tcp client refresh miss get hier direct application octet stream according to the squid docs tcp client refresh miss the client issued a no cache pragma or some analogous cache control command along with the request thus the cache has to refetch the object using a standard wget or say yum to retrieve packages does not cause client refresh misses is there something in the get url code that is causing the sending of a no cache pragma or maybe it s not turning off some default option in the underlying urllib or whatever it uses under the hood expected results file will be downloaded much faster and from the cache actual results squid is actually downloading the file again ,1 1770,6575049292.0,IssuesEvent,2017-09-11 14:53:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add timeout option to gce module,affects_2.3 cloud feature_idea gce waiting_on_maintainer,"##### ISSUE TYPE: Feature Idea ##### COMPONENT NAME: `cloud/gce.py` ##### SUMMARY: Add possibility to override default libcloud timeout and get rid of such errors: ``` 14:50:51 TASK [Create GCE instance] ***************************************************** 14:50:51 task path: /var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/playbooks/loadtesting-update-gce-image.yml:11 14:50:52 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 14:50:52 EXEC /bin/sh -c '( umask 77 
&& mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `"" && echo ansible-tmp-1475841052.07-172163251633504=""` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `"" ) && sleep 0' 14:50:52 PUT /tmp/tmpSINDmh TO /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce 14:50:52 EXEC /bin/sh -c 'chmod u+x /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/ /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce && sleep 0' 14:50:52 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce; rm -rf ""/var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/"" > /dev/null 2>&1 && sleep 0' 14:54:01 An exception occurred during task execution. The full traceback is: 14:54:01 Traceback (most recent call last): 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 640, in 14:54:01 main() 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 602, in main 14:54:01 module, gce, inames) 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 433, in create_instances 14:54:01 pd = gce.create_volume(None, ""%s"" % name, image=lc_image()) 14:54:01 File ""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py"", line 3571, in create_volume 14:54:01 data=volume_data, params=params) 14:54:01 File ""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py"", line 1007, in async_request 14:54:01 (self.timeout)) 14:54:01 libcloud.common.types.LibcloudError: 14:54:01 fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""gce""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 640, in \n main()\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 602, in main\n module, gce, inames)\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 433, in create_instances\n pd = gce.create_volume(None, \""%s\"" % name, image=lc_image())\n File \""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py\"", line 3571, in create_volume\n data=volume_data, params=params)\n File \""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py\"", line 1007, in async_request\n (self.timeout))\nlibcloud.common.types.LibcloudError: \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",True,"Add timeout option to gce module - ##### ISSUE TYPE: Feature Idea ##### COMPONENT NAME: `cloud/gce.py` ##### SUMMARY: Add possibility to override default libcloud timeout and get rid of such errors: ``` 14:50:51 TASK [Create GCE instance] ***************************************************** 14:50:51 task path: /var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/playbooks/loadtesting-update-gce-image.yml:11 14:50:52 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 14:50:52 EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `"" && echo ansible-tmp-1475841052.07-172163251633504=""` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `"" ) && sleep 0' 14:50:52 PUT /tmp/tmpSINDmh TO /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce 14:50:52 EXEC /bin/sh -c 'chmod u+x /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/ /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce && sleep 0' 14:50:52 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce; rm -rf ""/var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/"" > /dev/null 2>&1 && sleep 0' 14:54:01 An exception occurred during task execution. The full traceback is: 14:54:01 Traceback (most recent call last): 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 640, in 14:54:01 main() 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 602, in main 14:54:01 module, gce, inames) 14:54:01 File ""/tmp/ansible_wfsdcR/ansible_module_gce.py"", line 433, in create_instances 14:54:01 pd = gce.create_volume(None, ""%s"" % name, image=lc_image()) 14:54:01 File ""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py"", line 3571, in create_volume 14:54:01 data=volume_data, params=params) 14:54:01 File ""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py"", line 1007, in async_request 14:54:01 (self.timeout)) 14:54:01 libcloud.common.types.LibcloudError: 14:54:01 fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""gce""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 640, in \n main()\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 602, in main\n module, gce, inames)\n File \""/tmp/ansible_wfsdcR/ansible_module_gce.py\"", line 433, in create_instances\n pd = gce.create_volume(None, \""%s\"" % name, image=lc_image())\n File \""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py\"", line 3571, in create_volume\n data=volume_data, params=params)\n File \""/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py\"", line 1007, in async_request\n (self.timeout))\nlibcloud.common.types.LibcloudError: \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",1,add timeout option to gce module issue type feature idea component name cloud gce py summary add possibility to override default libcloud timeout and get rid of such errors task task path var lib jenkins jobs loadtesting cloud mysql build workspace ansible playbooks loadtesting update gce image yml establish local connection for user jenkins exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpsindmh to var lib jenkins ansible tmp ansible tmp gce exec bin sh c chmod u x var lib jenkins ansible tmp ansible tmp var lib jenkins ansible tmp ansible tmp gce sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf python var lib jenkins ansible tmp ansible tmp gce rm rf var lib jenkins ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible wfsdcr ansible module gce py line in main file tmp ansible wfsdcr ansible module gce py line in main module gce inames file tmp ansible wfsdcr ansible module gce py line in create instances pd gce create volume none s name image lc image file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud compute drivers gce py line in create volume data volume data params params file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud common base py line in async request self timeout libcloud common types libclouderror fatal failed changed false failed true invocation module name gce module stderr traceback most recent call last n file tmp ansible wfsdcr ansible module gce py line in n main n file tmp ansible wfsdcr ansible module gce py line in main n module gce inames n file tmp ansible wfsdcr ansible module gce py line in create instances n pd gce create volume none s name image lc image n file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud compute drivers gce py line in create volume n data volume data params params n file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud common base py line in async request n self timeout nlibcloud common types libclouderror n module stdout msg module failure ,1 1885,6577521839.0,IssuesEvent,2017-09-12 01:29:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vol unable to create volumes of specific size from snapshots,affects_2.1 aws 
bug_report cloud waiting_on_maintainer,"##### Issue Type: Bug Report ##### Plugin name ec2_vol ##### Ansible Version: 2.1.0 from git (cd51ba7965325fd5e7857e4cf2c3725b81b39352) ##### Ansible Configuration: Default ##### Environment: Using Ubuntu 15.10 in AWS but this is not affected by OS. ##### Summary: Some items copied from another user that created [this](https://github.com/ansible/ansible/issues/14007) issue which got closed since it was in the wrong repo but it's pretty much identical: The changes made to lines 442/443 in this pull request (https://github.com/ansible/ansible-modules-core/pull/1747/files) break the logic, making it so that the user can no longer create a volume of a specific size out of a snapshot. Lines from pull request: ORIGINAL: `if volume_size and id:` NEW BROKEN: `if volume_size and (id or snapshot):` ##### Steps To Reproduce: Test Playbook: ``` - name: Create volume and attach to this instance register: new_volume ec2_vol: state: present region: XXXXX instance: XXXXX volume_size: 30 snapshot: snap-XXXX device_name: /dev/sdX ``` Comment out the volume_size and you get a volume the size of the snapshot (1GB) instead of the 30 GB but with that flag there, you get errors about wrong parameters. ##### Expected results: Successfully creates the 30 GB volume out of the 1GB snapshot ##### Actual Results: ``` fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Cannot specify volume_size together with id or snapshot""} ``` ",True,"ec2_vol unable to create volumes of specific size from snapshots - ##### Issue Type: Bug Report ##### Plugin name ec2_vol ##### Ansible Version: 2.1.0 from git (cd51ba7965325fd5e7857e4cf2c3725b81b39352) ##### Ansible Configuration: Default ##### Environment: Using Ubuntu 15.10 in AWS but this is not affected by OS. ##### Summary: Some items copied from another user that created [this](https://github.com/ansible/ansible/issues/14007) issue which got closed since it was in the wrong repo but it's pretty much identical: The changes made to lines 442/443 in this pull request (https://github.com/ansible/ansible-modules-core/pull/1747/files) break the logic, making it so that the user can no longer create a volume of a specific size out of a snapshot. Lines from pull request: ORIGINAL: `if volume_size and id:` NEW BROKEN: `if volume_size and (id or snapshot):` ##### Steps To Reproduce: Test Playbook: ``` - name: Create volume and attach to this instance register: new_volume ec2_vol: state: present region: XXXXX instance: XXXXX volume_size: 30 snapshot: snap-XXXX device_name: /dev/sdX ``` Comment out the volume_size and you get a volume the size of the snapshot (1GB) instead of the 30 GB but with that flag there, you get errors about wrong parameters. ##### Expected results: Successfully creates the 30 GB volume out of the 1GB snapshot ##### Actual Results: ``` fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Cannot specify volume_size together with id or snapshot""} ``` ",1, vol unable to create volumes of specific size from snapshots issue type bug report plugin name vol ansible version from git ansible configuration default environment using ubuntu in aws but this is not affected by os summary some items copied from another user that created issue which got closed since it was in the wrong repo but it s pretty much identical the changes made to lines in this pull request break the logic making it so that the user can no longer create a volume of a specific size out of a snapshot lines from pull request original if volume size and id new broken if volume size and id or snapshot steps to reproduce test playbook name create volume and attach to this instance register new volume vol state present region xxxxx instance xxxxx volume size snapshot snap xxxx device name dev sdx comment out the volume size and you get a volume the size of the snapshot instead of the gb but with that flag there you get errors about wrong parameters expected results successfully creates the gb volume out of the snapshot actual results fatal failed changed false failed true msg cannot specify volume size together with id or snapshot ,1 1852,6577396441.0,IssuesEvent,2017-09-12 00:37:22,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,The ios_config module does not delete the username from the router,affects_2.0 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/ios_config ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.0.1.0 ``` ##### CONFIGURATION No configuration. ##### OS / ENVIRONMENT Working on Ubuntu, release trusty. The issue is not platform dependent. ##### SUMMARY When using command ""no username"" with ios_config module, the task timeouts. ##### STEPS TO REPRODUCE I am using the following task configuration in the playbook. ``` - name: Delete users ios_config: host: ""{{ ansible_ssh_host }}"" username: ""{{ username }}"" password: ""{{ password }}"" lines: - no username admin ``` The result of the task is: ``` TASK [Delete users] ************************************************************ fatal: [hub]: FAILED! => {""changed"": false, ""commands"": [""configure terminal"", ""no username admin""], ""failed"": true, ""msg"": ""timeout trying to send command""} ``` The reason for this is that Cisco ask for confirmation when deleting the username from the configuration. ``` R2(config)#no username admin This operation will remove all username related configurations with same name.Do you want to continue? [confirm] R2(config)# ``` ##### EXPECTED RESULTS It is expected that command should be deleted from the configuration. ##### ACTUAL RESULTS ``` fatal: [wan1]: FAILED! 
=> {""changed"": false, ""commands"": [""configure terminal"", ""no username admin""], ""failed"": true, ""invocation"": {""module_args"": {""after"": null, ""auth_pass"": null, ""authorize"": false, ""before"": null, ""config"": null, ""force"": false, ""host"": ""192.168.35.152"", ""lines"": [""no username admin""], ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": null, ""replace"": ""line"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER""}, ""module_name"": ""ios_config""}, ""msg"": ""timeout trying to send command""} ``` ",True,"The ios_config module does not delete the username from the router - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/ios_config ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.0.1.0 ``` ##### CONFIGURATION No configuration. ##### OS / ENVIRONMENT Working on Ubuntu, release trusty. The issue is not platform dependent. ##### SUMMARY When using command ""no username"" with ios_config module, the task timeouts. ##### STEPS TO REPRODUCE I am using the following task configuration in the playbook. ``` - name: Delete users ios_config: host: ""{{ ansible_ssh_host }}"" username: ""{{ username }}"" password: ""{{ password }}"" lines: - no username admin ``` The result of the task is: ``` TASK [Delete users] ************************************************************ fatal: [hub]: FAILED! => {""changed"": false, ""commands"": [""configure terminal"", ""no username admin""], ""failed"": true, ""msg"": ""timeout trying to send command""} ``` The reason for this is that Cisco ask for confirmation when deleting the username from the configuration. ``` R2(config)#no username admin This operation will remove all username related configurations with same name.Do you want to continue? [confirm] R2(config)# ``` ##### EXPECTED RESULTS It is expected that command should be deleted from the configuration. ##### ACTUAL RESULTS ``` fatal: [wan1]: FAILED! 
=> {""changed"": false, ""commands"": [""configure terminal"", ""no username admin""], ""failed"": true, ""invocation"": {""module_args"": {""after"": null, ""auth_pass"": null, ""authorize"": false, ""before"": null, ""config"": null, ""force"": false, ""host"": ""192.168.35.152"", ""lines"": [""no username admin""], ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": 22, ""provider"": null, ""replace"": ""line"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER""}, ""module_name"": ""ios_config""}, ""msg"": ""timeout trying to send command""} ``` ",1,the ios config module does not delete the username from the router issue type bug report component name networking ios config ansible version ansible version ansible configuration no configuration os environment working on ubuntu release trusty the issue is not platform dependent summary when using command no username with ios config module the task timeouts steps to reproduce i am using the following task configuration in the playbook name delete users ios config host ansible ssh host username username password password lines no username admin the result of the task is task fatal failed changed false commands failed true msg timeout trying to send command the reason for this is that cisco ask for confirmation when deleting the username from the configuration config no username admin this operation will remove all username related configurations with same name do you want to continue config expected results it is expected that command should be deleted from the configuration actual results fatal failed changed false commands failed true invocation module args after null auth pass null authorize false before null config null force false host lines match line parents null password value specified in no log parameter port provider null replace line username value specified in no log parameter module name ios config msg timeout trying to send command ,1 1257,5332016878.0,IssuesEvent,2017-02-15 20:59:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config - binascii.Error: Incorrect padding,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 17c0f52c96) last updated 2016/09/28 16:38:32 (GMT +000) lib/ansible/modules/core: (detached HEAD 0f505378c3) last updated 2016/09/23 13:51:09 (GMT +000) lib/ansible/modules/extras: (detached HEAD 1ade801f65) last updated 2016/09/23 13:51:25 (GMT +000) config file = {path_clipped}/ansible.cfg configured module search path = ['{path_clipped}/ansible/library'] ``` Note - I know that I'm using month old version of Ansible but I can confirm that I have had this issue a month back itself. So, unless this issue was fixed in last one month, I guess updating might not help. ##### CONFIGURATION Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ``` [defaults] log_path={log_file_path} host_key_checking = False ``` ##### OS / ENVIRONMENT Running from RHEL 6.8 with remote devices on Cisco IOS. ##### SUMMARY When the ios_config module is used to take backup of running config of remote device to local server from which Ansible playbook is running then few devices give incorrect padding error while rest work fine. The same incorrect padding error goes away when same playbook is triggered again. 
I Googled around and found this to be problem with known_hosts file so I added host_key_checking false in ansible.cfg but still the issue persists. I can tell you that even after added host_key_checking false I see known_hosts file getting updated when playbook is executed on new devices. Note - I have seen cases where a playbook when run for XYZ device may be success. 2nd executing of same playbook on same device then resulted in incorrect padding error which means that even though known_hosts was updated during first successful run still during second run something triggered the padding error. This kind of proves that issue is not because of known_hosts file. Lastly, I only see this issue when I run my playbook for multiple devices. This is when few fail with incorrect padding error and rest succeed. 2nd trigger marks the failed devices as success as well. ##### STEPS TO REPRODUCE Problem reproduction has been explained in detailed above. One execution of below playbook may result in incorrect padding error for few out of many devices while next executing marks those devices success. ``` Below role is called from site.yml --- - name: Backing up running config ios_config: timeout: 60 backup: yes authorize: yes provider: ""{{ }}"" ``` ##### EXPECTED RESULTS Here is the success output of the same playbook for a device XYZ (name changed): ``` {date/ID/PID and other detailed removed} | ok: [XYZ] => { ""backup_path"": ""{path clipped}"", ""changed"": false, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": true, ""backup"": true, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""XYZ"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""authorize"": true, ""host"": ""XYZ"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""{name clipped}"", ""username"": ""{name clipped}"" }, ""replace"": ""line"", ""save"": false, ""src"": null, ""ssh_keyfile"": null, ""timeout"": 60, ""transport"": ""{name clipped}"", ""use_ssl"": true, ""username"": ""{name clipped}"", ""validate_certs"": true } }, ""warnings"": [] } ``` ##### ACTUAL RESULTS Here is output with -vvvvv ``` {details like pid etc clipped} | An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_5EnQ71/ansible_module_ios_config.py"", line 363, in main() File ""/tmp/ansible_5EnQ71/ansible_module_ios_config.py"", line 350, in main result['__backup__'] = module.config.get_config() File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py"", line 125, in config File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py"", line 147, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py"", line 228, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py"", line 82, in open File ""{path clipped}ansible/lib/paramiko/client.py"", line 173, in load_host_keys self._host_keys.load(filename) File ""{path clipped}ansible/lib/paramiko/hostkeys.py"", line 155, in load e = HostKeyEntry.from_line(line) File ""{path clipped}/ansible/lib/paramiko/hostkeys.py"", line 67, in from_line key = RSAKey(data=base64.decodestring(key)) File ""/usr/lib64/python2.6/base64.py"", line 321, in decodestring return binascii.a2b_base64(s) binascii.Error: Incorrect padding {details like pid user etc clipped} | fatal: [XYZ]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""authorize"": true, ""backup"": true, ""provider"": { ""authorize"": true, ""host"": ""XYZ"", ""password"": "" "", ""transport"": ""{details clipped}"", ""username"": ""{details clipped}"" }, ""timeout"": 60 }, ""module_name"": ""ios_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5EnQ71/ansible_module_ios_config.py\"", line 363, in \n main()\n File \""/tmp/ansible_5EnQ71/ansible_module_ios_config.py\"", line 350, in main\n result['__backup__'] = module.config.get_config()\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py\"", line 125, in config\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py\"", line 147, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 228, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 82, in open\n File \""{details clipped}ansible/lib/paramiko/client.py\"", line 173, in load_host_keys\n self._host_keys.load(filename)\n File \""{details clipped}ansible/lib/paramiko/hostkeys.py\"", line 155, in load\n e = HostKeyEntry.from_line(line)\n File \""{details clipped}/ansible/lib/paramiko/hostkeys.py\"", line 67, in from_line\n key = RSAKey(data=base64.decodestring(key))\n File \""/usr/lib64/python2.6/base64.py\"", line 321, in decodestring\n return binascii.a2b_base64(s)\nbinascii.Error: Incorrect padding\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",True,"ios_config - binascii.Error: Incorrect padding - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 17c0f52c96) last updated 2016/09/28 16:38:32 (GMT +000) lib/ansible/modules/core: (detached HEAD 0f505378c3) last updated 2016/09/23 13:51:09 (GMT +000) lib/ansible/modules/extras: (detached HEAD 1ade801f65) last updated 2016/09/23 13:51:25 (GMT +000) config file = {path_clipped}/ansible.cfg configured module search path = ['{path_clipped}/ansible/library'] ``` Note - I know that I'm using 
month old version of Ansible but I can confirm that I have had this issue a month back itself. So, unless this issue was fixed in last one month, I guess updating might not help. ##### CONFIGURATION Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ``` [defaults] log_path={log_file_path} host_key_checking = False ``` ##### OS / ENVIRONMENT Running from RHEL 6.8 with remote devices on Cisco IOS. ##### SUMMARY When the ios_config module is used to take backup of running config of remote device to local server from which Ansible playbook is running then few devices give incorrect padding error while rest work fine. The same incorrect padding error goes away when same playbook is triggered again. I Googled around and found this to be problem with known_hosts file so I added host_key_checking false in ansible.cfg but still the issue persists. I can tell you that even after added host_key_checking false I see known_hosts file getting updated when playbook is executed on new devices. Note - I have seen cases where a playbook when run for XYZ device may be success. 2nd executing of same playbook on same device then resulted in incorrect padding error which means that even though known_hosts was updated during first successful run still during second run something triggered the padding error. This kind of proves that issue is not because of known_hosts file. Lastly, I only see this issue when I run my playbook for multiple devices. This is when few fail with incorrect padding error and rest succeed. 2nd trigger marks the failed devices as success as well. ##### STEPS TO REPRODUCE Problem reproduction has been explained in detailed above. One execution of below playbook may result in incorrect padding error for few out of many devices while next executing marks those devices success. ``` Below role is called from site.yml --- - name: Backing up running config ios_config: timeout: 60 backup: yes authorize: yes provider: ""{{ }}"" ``` ##### EXPECTED RESULTS Here is the success output of the same playbook for a device XYZ (name changed): ``` {date/ID/PID and other detailed removed} | ok: [XYZ] => { ""backup_path"": ""{path clipped}"", ""changed"": false, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": true, ""backup"": true, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""XYZ"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""authorize"": true, ""host"": ""XYZ"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""{name clipped}"", ""username"": ""{name clipped}"" }, ""replace"": ""line"", ""save"": false, ""src"": null, ""ssh_keyfile"": null, ""timeout"": 60, ""transport"": ""{name clipped}"", ""use_ssl"": true, ""username"": ""{name clipped}"", ""validate_certs"": true } }, ""warnings"": [] } ``` ##### ACTUAL RESULTS Here is output with -vvvvv ``` {details like pid etc clipped} | An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_5EnQ71/ansible_module_ios_config.py"", line 363, in main() File ""/tmp/ansible_5EnQ71/ansible_module_ios_config.py"", line 350, in main result['__backup__'] = module.config.get_config() File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py"", line 125, in config File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py"", line 147, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/ios.py"", line 180, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py"", line 228, in connect File ""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py"", line 82, in open File ""{path clipped}ansible/lib/paramiko/client.py"", line 173, in load_host_keys self._host_keys.load(filename) File ""{path clipped}ansible/lib/paramiko/hostkeys.py"", line 155, in load e = HostKeyEntry.from_line(line) File ""{path clipped}/ansible/lib/paramiko/hostkeys.py"", line 67, in from_line key = RSAKey(data=base64.decodestring(key)) File ""/usr/lib64/python2.6/base64.py"", line 321, in decodestring return binascii.a2b_base64(s) binascii.Error: Incorrect padding {details like pid user etc clipped} | fatal: [XYZ]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""authorize"": true, ""backup"": true, ""provider"": { ""authorize"": true, ""host"": ""XYZ"", ""password"": "" "", ""transport"": ""{details clipped}"", ""username"": ""{details clipped}"" }, ""timeout"": 60 }, ""module_name"": ""ios_config"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5EnQ71/ansible_module_ios_config.py\"", line 363, in \n main()\n File \""/tmp/ansible_5EnQ71/ansible_module_ios_config.py\"", line 350, in main\n result['__backup__'] = module.config.get_config()\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py\"", line 125, in config\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/network.py\"", line 147, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/ios.py\"", line 180, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 228, in connect\n File \""/tmp/ansible_5EnQ71/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 82, in open\n File \""{details clipped}ansible/lib/paramiko/client.py\"", line 173, in load_host_keys\n self._host_keys.load(filename)\n File \""{details clipped}ansible/lib/paramiko/hostkeys.py\"", line 155, in load\n e = HostKeyEntry.from_line(line)\n File \""{details clipped}/ansible/lib/paramiko/hostkeys.py\"", line 67, in from_line\n key = RSAKey(data=base64.decodestring(key))\n File \""/usr/lib64/python2.6/base64.py\"", line 321, in decodestring\n return binascii.a2b_base64(s)\nbinascii.Error: Incorrect padding\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ",1,ios config binascii error incorrect padding issue type bug report component name ios config ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file path clipped ansible cfg configured module search path note i know that i m using month old version of ansible but i can confirm that i have had this issue a month back itself so unless this issue was fixed in last one month i guess updating might not help configuration mention any settings 
you have changed added removed in ansible cfg or using the ansible environment variables log path log file path host key checking false os environment running from rhel with remote devices on cisco ios summary when the ios config module is used to take backup of running config of remote device to local server from which ansible playbook is running then few devices give incorrect padding error while rest work fine the same incorrect padding error goes away when same playbook is triggered again i googled around and found this to be problem with known hosts file so i added host key checking false in ansible cfg but still the issue persists i can tell you that even after added host key checking false i see known hosts file getting updated when playbook is executed on new devices note i have seen cases where a playbook when run for xyz device may be success executing of same playbook on same device then resulted in incorrect padding error which means that even though known hosts was updated during first successful run still during second run something triggered the padding error this kind of proves that issue is not because of known hosts file lastly i only see this issue when i run my playbook for multiple devices this is when few fail with incorrect padding error and rest succeed trigger marks the failed devices as success as well steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used problem reproduction has been explained in detailed above one execution of below playbook may result in incorrect padding error for few out of many devices while next executing marks those devices success below role is called from site yml name backing up running config ios config timeout backup yes authorize yes provider expected results here is the success output of the same playbook for a device xyz name changed date id pid and other detailed removed ok backup path path clipped changed false invocation module args after null auth pass null authorize true backup true before null config null defaults false force false host xyz lines null match line parents null password value specified in no log parameter port null provider authorize true host xyz password value specified in no log parameter transport name clipped username name clipped replace line save false src null ssh keyfile null timeout transport name clipped use ssl true username name clipped validate certs true warnings actual results here is output with vvvvv details like pid etc clipped an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios config py line in main file tmp ansible ansible module ios config py line in main result module config get config file tmp ansible ansible modlib zip ansible module utils network py line in config file tmp ansible ansible modlib zip ansible module utils network py line in connect file tmp ansible ansible modlib zip ansible module utils ios py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in connect file tmp ansible ansible modlib zip ansible module utils shell py line in open file path clipped ansible lib paramiko client py line in load host keys self host keys load filename file path clipped ansible lib paramiko hostkeys py line in load e hostkeyentry from line line file path clipped ansible lib paramiko hostkeys py line in from line key rsakey data decodestring key file usr py line in decodestring return binascii s binascii error 
incorrect padding details like pid user etc clipped fatal failed changed false failed true invocation module args authorize true backup true provider authorize true host xyz password transport details clipped username details clipped timeout module name ios config module stderr traceback most recent call last n file tmp ansible ansible module ios config py line in n main n file tmp ansible ansible module ios config py line in main n result module config get config n file tmp ansible ansible modlib zip ansible module utils network py line in config n file tmp ansible ansible modlib zip ansible module utils network py line in connect n file tmp ansible ansible modlib zip ansible module utils ios py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in connect n file tmp ansible ansible modlib zip ansible module utils shell py line in open n file details clipped ansible lib paramiko client py line in load host keys n self host keys load filename n file details clipped ansible lib paramiko hostkeys py line in load n e hostkeyentry from line line n file details clipped ansible lib paramiko hostkeys py line in from line n key rsakey data decodestring key n file usr py line in decodestring n return binascii s nbinascii error incorrect padding n module stdout msg module failure ,1 989,4756401056.0,IssuesEvent,2016-10-24 13:55:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"""Unsupported parameter for module: type"" although it is defined in the documentation and is necessary",affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_snmp_host ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. title. Using ```type: traps``` or ```type: trap``` leads to the same issue. ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Configuring the allowed NMS for that community in new SNMPv2c nxos_snmp_host: provider: ""{{ connections.nxapi }}"" community: whatever snmp_host: ""{{ nms_mgt_ip_address }}"" type: traps udp: 162 version: v2c state: present register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS The type parameter - traps or informs - is necessary.. 
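For the `unsupported parameter for module: type` failure shown below, the mechanism is ordinary argument-spec validation: a key in the task that the installed copy of the module does not declare is rejected before anything is sent to the device, regardless of what the online documentation lists. The sketch below is a simplified, hypothetical illustration of that check, not Ansible's actual `AnsibleModule` code, and the spec contents are invented for the example.

```python
#!/usr/bin/env python
# Hypothetical, simplified version of module argument validation, showing why a
# task option missing from the installed module's spec fails immediately.
ARGUMENT_SPEC = {                      # illustrative only
    "community": {"type": "str"},
    "snmp_host": {"type": "str"},
    "udp": {"type": "int", "default": 162},
    "version": {"choices": ["v1", "v2c", "v3"]},
    "state": {"choices": ["present", "absent"], "default": "present"},
}


def check_params(params, spec):
    unsupported = sorted(set(params) - set(spec))
    if unsupported:
        raise ValueError("unsupported parameter for module: %s" % unsupported[0])
    return params


task_args = {"community": "whatever", "snmp_host": "172.21.100.1",
             "udp": 162, "version": "v2c", "state": "present", "type": "traps"}

try:
    check_params(task_args, ARGUMENT_SPEC)
except ValueError as exc:
    print(exc)                         # -> unsupported parameter for module: type
```

So the practical check is whether the installed `nxos_snmp_host.py` actually declares `type` in its argument spec; if it does not, neither `traps` nor `trap` can work until the module itself is updated.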
The [documentation](http://docs.ansible.com/ansible/nxos_snmp_host_module.html) has forgotten the trailing s. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Configuring the allowed NMS for that community in new SNMPv2c] *** fatal: [NX_OSv_Spine_12]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: type""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_11(config)# snmp-server host 172.21.100.1 traps version 2c whatever udp-port 162 NX_OSv_Spine_11(config)# ```",True,"""Unsupported parameter for module: type"" although it is defined in the documentation and is necessary - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_snmp_host ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. title. Using ```type: traps``` or ```type: trap``` leads to the same issue. ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Configuring the allowed NMS for that community in new SNMPv2c nxos_snmp_host: provider: ""{{ connections.nxapi }}"" community: whatever snmp_host: ""{{ nms_mgt_ip_address }}"" type: traps udp: 162 version: v2c state: present register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS The type parameter - traps or informs - is necessary.. The [documentation](http://docs.ansible.com/ansible/nxos_snmp_host_module.html) has forgotten the trailing s. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Configuring the allowed NMS for that community in new SNMPv2c] *** fatal: [NX_OSv_Spine_12]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""unsupported parameter for module: type""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_11(config)# snmp-server host 172.21.100.1 traps version 2c whatever udp-port 162 NX_OSv_Spine_11(config)# ```",1, unsupported parameter for module type although it is defined in the documentation and is necessary issue type bug report component name nxos snmp host ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment host ubuntu target nx osv summary cf title using type traps or type trap leads to the same issue steps to reproduce inventory hosts nms mgt ip address nx osv spine ansible host nx osv spine ansible host structure passed as provider connections ssh defined in group vars nx osv connections yml and a symbolic link in roles nxos snmp defaults points to nx osv connections nxapi transport nxapi host ansible host ansible port port http port ansible user username admin password xxxxxxxx enable secret password auth pass xxxxxxxx http or https use ssl no validate certs https validate certs role nxos snmp include vars defaults os family connections yml name configuring the allowed nms for that community in new nxos snmp host provider connections nxapi community whatever snmp host nms mgt ip address type traps udp version state present register result playbook name configuring snmp on nx os nx osv hosts nx osv roles nxos snmp expected results the type parameter traps or informs is necessary the has forgotten the trailing s actual results task fatal failed changed false failed true msg unsupported parameter for module type no issue when configuring through the cli nx osv spine config snmp server host traps version whatever udp port nx osv spine config ,1 962,4704770270.0,IssuesEvent,2016-10-13 12:40:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Users of ios_template should be directed to ios_config not eos_config,affects_2.3 docs_report networking waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_template ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY At it says ""Deprecated in 2.2. Use eos_config instead"" It should say ""Use ios_config instead"" ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"Users of ios_template should be directed to ios_config not eos_config - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_template ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY At it says ""Deprecated in 2.2. 
Use eos_config instead"" It should say ""Use ios_config instead"" ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,users of ios template should be directed to ios config not eos config issue type documentation report component name ios template ansible version n a configuration n a os environment n a summary at it says deprecated in use eos config instead it should say use ios config instead steps to reproduce expected results actual results ,1 1703,6574397594.0,IssuesEvent,2017-09-11 12:44:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,EC2 Instance termination fails with SSH error,affects_1.9 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME EC2 module ##### ANSIBLE VERSION ``` $ ansible --version ansible 1.9.3 configured module search path = None ``` ##### CONFIGURATION This job is running from Ansible Tower ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY I have a playbook that launches an instance runs some tasks and later tries to terminate the instance. Getting inconsistent behaviour on termination, sometimes it suceeds other times it fail with an ssh connection error. ##### STEPS TO REPRODUCE Run a playbook like the following ``` --- - name: Create Datomic Instance hosts: localhost vars_files: - group_vars/datomic tasks: - name: ""ansible: Create Instance"" ec2: count: 1 assign_public_ip: no wait: yes state: ""present"" key_name: ""ansible-2016-06-03"" region: ""us-east-1"" image: ""ami-f25fcfe5"" instance_type: ""m4.large"" tenancy: ""default"" group: ""{{ security_group }}"" vpc_subnet_id: ""{{ subnet_id }}"" instance_profile_name: ""datomic"" instance_tags: Name: ""datomic"" register: ec2 - name: ""ansible: Add to Host Group"" add_host: hostname: ""{{ item.private_ip }}"" groupname: launched with_items: ""{{ ec2.instances }}"" - name: ""ansible: Wait for SSH port to come up"" wait_for: host: ""{{ item.private_ip }}"" port: 22 delay: 60 timeout: 320 state: started with_items: ""{{ ec2.instances }}"" - pause: seconds=30 - name: Terminate Datomic Instance hosts: localhost connection: local tasks: - name: ""post: Terminate Datomic Instance"" local_action: module: ec2 state: ""absent"" instance_ids: ""{{ ec2.instance_ids }}"" region: ""us-east-1"" retries: 3 delay: 1 ``` ##### EXPECTED RESULTS Successful termination! 
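The traceback quoted below is raised from the Tower callback plugin (psutil's NoSuchProcess while it cleans up SSH control masters) rather than from the ec2 module itself. At the playbook level, one hedged mitigation is to make the termination task block until the instances are actually gone; the sketch below reuses the registered `ec2` variable from the play above, and the timeout value is illustrative, not a confirmed fix for the callback crash.

```yaml
# Sketch: terminate and wait so the play does not end while instances are
# still shutting down; this does not change the callback plugin behaviour.
- name: "post: Terminate Datomic Instance (blocking until gone)"
  ec2:
    state: absent
    instance_ids: "{{ ec2.instance_ids }}"
    region: us-east-1
    wait: yes
    wait_timeout: 300
  delegate_to: localhost
```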
##### ACTUAL RESULTS Throws a python error about SSH connection termination ``` TASK: [post: Terminate Datomic Instance] ********************************* changed: [localhost -> 127.0.0.1] PLAY RECAP ******************************************************************** Traceback (most recent call last): File ""/usr/bin/ansible-playbook"", line 324, in sys.exit(main(sys.argv[1:])) File ""/usr/bin/ansible-playbook"", line 268, in main playbook_cb.on_stats(pb.stats) File ""/usr/lib/pymodules/python2.7/ansible/callbacks.py"", line 724, in on_stats call_callback_module('playbook_on_stats', stats) File ""/usr/lib/pymodules/python2.7/ansible/callbacks.py"", line 179, in call_callback_module method(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py"", line 447, in playbook_on_stats self.terminate_ssh_control_masters() File ""/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py"", line 279, in terminate_ssh_control_masters proc.terminate() File ""/usr/lib/python2.7/dist-packages/psutil/__init__.py"", line 904, in terminate self.send_signal(signal.SIGTERM) File ""/usr/lib/python2.7/dist-packages/psutil/__init__.py"", line 173, in wrapper raise NoSuchProcess(self.pid, self._platform_impl._process_name) psutil._error.NoSuchProcess: process no longer exists (pid=1024, name='ssh') ``` ",True,"EC2 Instance termination fails with SSH error - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME EC2 module ##### ANSIBLE VERSION ``` $ ansible --version ansible 1.9.3 configured module search path = None ``` ##### CONFIGURATION This job is running from Ansible Tower ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY I have a playbook that launches an instance runs some tasks and later tries to terminate the instance. Getting inconsistent behaviour on termination, sometimes it suceeds other times it fail with an ssh connection error. ##### STEPS TO REPRODUCE Run a playbook like the following ``` --- - name: Create Datomic Instance hosts: localhost vars_files: - group_vars/datomic tasks: - name: ""ansible: Create Instance"" ec2: count: 1 assign_public_ip: no wait: yes state: ""present"" key_name: ""ansible-2016-06-03"" region: ""us-east-1"" image: ""ami-f25fcfe5"" instance_type: ""m4.large"" tenancy: ""default"" group: ""{{ security_group }}"" vpc_subnet_id: ""{{ subnet_id }}"" instance_profile_name: ""datomic"" instance_tags: Name: ""datomic"" register: ec2 - name: ""ansible: Add to Host Group"" add_host: hostname: ""{{ item.private_ip }}"" groupname: launched with_items: ""{{ ec2.instances }}"" - name: ""ansible: Wait for SSH port to come up"" wait_for: host: ""{{ item.private_ip }}"" port: 22 delay: 60 timeout: 320 state: started with_items: ""{{ ec2.instances }}"" - pause: seconds=30 - name: Terminate Datomic Instance hosts: localhost connection: local tasks: - name: ""post: Terminate Datomic Instance"" local_action: module: ec2 state: ""absent"" instance_ids: ""{{ ec2.instance_ids }}"" region: ""us-east-1"" retries: 3 delay: 1 ``` ##### EXPECTED RESULTS Successful termination! 
##### ACTUAL RESULTS Throws a python error about SSH connection termination ``` TASK: [post: Terminate Datomic Instance] ********************************* changed: [localhost -> 127.0.0.1] PLAY RECAP ******************************************************************** Traceback (most recent call last): File ""/usr/bin/ansible-playbook"", line 324, in sys.exit(main(sys.argv[1:])) File ""/usr/bin/ansible-playbook"", line 268, in main playbook_cb.on_stats(pb.stats) File ""/usr/lib/pymodules/python2.7/ansible/callbacks.py"", line 724, in on_stats call_callback_module('playbook_on_stats', stats) File ""/usr/lib/pymodules/python2.7/ansible/callbacks.py"", line 179, in call_callback_module method(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py"", line 447, in playbook_on_stats self.terminate_ssh_control_masters() File ""/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py"", line 279, in terminate_ssh_control_masters proc.terminate() File ""/usr/lib/python2.7/dist-packages/psutil/__init__.py"", line 904, in terminate self.send_signal(signal.SIGTERM) File ""/usr/lib/python2.7/dist-packages/psutil/__init__.py"", line 173, in wrapper raise NoSuchProcess(self.pid, self._platform_impl._process_name) psutil._error.NoSuchProcess: process no longer exists (pid=1024, name='ssh') ``` ",1, instance termination fails with ssh error issue type bug report component name module ansible version ansible version ansible configured module search path none configuration this job is running from ansible tower os environment ubuntu summary i have a playbook that launches an instance runs some tasks and later tries to terminate the instance getting inconsistent behaviour on termination sometimes it suceeds other times it fail with an ssh connection error steps to reproduce run a playbook like the following name create datomic instance hosts localhost vars files group vars datomic tasks name ansible create instance count assign public ip no wait yes state present key name ansible region us east image ami instance type large tenancy default group security group vpc subnet id subnet id instance profile name datomic instance tags name datomic register name ansible add to host group add host hostname item private ip groupname launched with items instances name ansible wait for ssh port to come up wait for host item private ip port delay timeout state started with items instances pause seconds name terminate datomic instance hosts localhost connection local tasks name post terminate datomic instance local action module state absent instance ids instance ids region us east retries delay expected results successful termination actual results throws a python error about ssh connection termination task changed play recap traceback most recent call last file usr bin ansible playbook line in sys exit main sys argv file usr bin ansible playbook line in main playbook cb on stats pb stats file usr lib pymodules ansible callbacks py line in on stats call callback module playbook on stats stats file usr lib pymodules ansible callbacks py line in call callback module method args kwargs file usr lib dist packages awx plugins callback job event callback py line in playbook on stats self terminate ssh control masters file usr lib dist packages awx plugins callback job event callback py line in terminate ssh control masters proc terminate file usr lib dist packages psutil init py line in terminate self send signal signal sigterm file usr lib dist packages psutil init 
py line in wrapper raise nosuchprocess self pid self platform impl process name psutil error nosuchprocess process no longer exists pid name ssh ,1 879,4541609012.0,IssuesEvent,2016-09-09 18:22:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,OpenStack os_server module defines auto_ip and floating_ip_pools as mutually exclusive,affects_2.0 bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME /cloud/openstack ##### ANSIBLE VERSION ``` ansible 2.0.0.2 config file = /Users/sebastian/helpers/ansible/ansible-projects.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The os_server module doesn't allow auto_ip and floating_ip_pools to be defined together. This prevents me to programmatically decide whether to assign an floatingip or not. ##### STEPS TO REPRODUCE I setup multiple servers, some of which should receive a floatingip, some don't. So i define a var floating_ip_pool per server. If it is not set, i want to disable floatingip assignment via auto_ip: no and floating_ip_pools: no. ``` - name: Generate instances for all defined servers os_server: name: ""{{ item }}"" image: ""{{ os_image }}"" floating_ip_pools: ""{{ item.floating_ip_pool | default(no) }}"" auto_ip: ""{% if item.floating_ip_pool is defined %}yes{% else %}no{% endif %}"" with_items: ""{{groups.all}}"" ``` ##### EXPECTED RESULTS I want all servers with auto_ip: no not have a floatingip, no matter what is defined in floating_ip_pools. ##### ACTUAL RESULTS Ansible won't allow me to define both parameters at once, so there is no way to handle this use case programatically. ``` TASK [os-server : Generate instances for all defined servers] ******** failed: [localhost] => (item=my-server0) => {""failed"": true, ""item"": ""my-server0"", ""msg"": ""parameters are mutually exclusive: ['auto_ip', 'floating_ip_pools']""} ``` ",True,"OpenStack os_server module defines auto_ip and floating_ip_pools as mutually exclusive - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME /cloud/openstack ##### ANSIBLE VERSION ``` ansible 2.0.0.2 config file = /Users/sebastian/helpers/ansible/ansible-projects.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The os_server module doesn't allow auto_ip and floating_ip_pools to be defined together. This prevents me to programmatically decide whether to assign an floatingip or not. ##### STEPS TO REPRODUCE I setup multiple servers, some of which should receive a floatingip, some don't. So i define a var floating_ip_pool per server. If it is not set, i want to disable floatingip assignment via auto_ip: no and floating_ip_pools: no. ``` - name: Generate instances for all defined servers os_server: name: ""{{ item }}"" image: ""{{ os_image }}"" floating_ip_pools: ""{{ item.floating_ip_pool | default(no) }}"" auto_ip: ""{% if item.floating_ip_pool is defined %}yes{% else %}no{% endif %}"" with_items: ""{{groups.all}}"" ``` ##### EXPECTED RESULTS I want all servers with auto_ip: no not have a floatingip, no matter what is defined in floating_ip_pools. ##### ACTUAL RESULTS Ansible won't allow me to define both parameters at once, so there is no way to handle this use case programatically. 
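Since the module rejects the two options together, one hedged way to keep the decision data-driven is to split creation into two tasks so each call passes only one of them. The sketch below reuses the report's own variables (`os_image`, `floating_ip_pool`, `groups.all`) and mirrors its per-item lookup, so it is illustrative rather than a confirmed fix.

```yaml
# Sketch: never pass auto_ip and floating_ip_pools in the same task.
- name: Create servers that should receive a floating IP
  os_server:
    name: "{{ item }}"
    image: "{{ os_image }}"
    floating_ip_pools: "{{ item.floating_ip_pool }}"
  with_items: "{{ groups.all }}"
  when: item.floating_ip_pool is defined

- name: Create servers without any floating IP
  os_server:
    name: "{{ item }}"
    image: "{{ os_image }}"
    auto_ip: no
  with_items: "{{ groups.all }}"
  when: item.floating_ip_pool is not defined
```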
``` TASK [os-server : Generate instances for all defined servers] ******** failed: [localhost] => (item=my-server0) => {""failed"": true, ""item"": ""my-server0"", ""msg"": ""parameters are mutually exclusive: ['auto_ip', 'floating_ip_pools']""} ``` ",1,openstack os server module defines auto ip and floating ip pools as mutually exclusive issue type bug report component name cloud openstack ansible version ansible config file users sebastian helpers ansible ansible projects cfg configured module search path default w o overrides configuration n a os environment n a summary the os server module doesn t allow auto ip and floating ip pools to be defined together this prevents me to programmatically decide whether to assign an floatingip or not steps to reproduce i setup multiple servers some of which should receive a floatingip some don t so i define a var floating ip pool per server if it is not set i want to disable floatingip assignment via auto ip no and floating ip pools no name generate instances for all defined servers os server name item image os image floating ip pools item floating ip pool default no auto ip if item floating ip pool is defined yes else no endif with items groups all expected results i want all servers with auto ip no not have a floatingip no matter what is defined in floating ip pools actual results ansible won t allow me to define both parameters at once so there is no way to handle this use case programatically task failed item my failed true item my msg parameters are mutually exclusive ,1 1898,6577549525.0,IssuesEvent,2017-09-12 01:41:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"while running a copy module task, ansible requested this info passed to the list.",affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: ##### Ansible Version: ``` 23:06 $ ansible --version ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Dynamic inventory from dopy ``` [ssh_connection] scp_if_ssh=True ssh_args= -o ForwardAgent=yes ``` ##### Environment: OSX 10.10 host Ubuntu 14.04 remote ##### Summary: While running playbook, ansible reported back ``` [WARNING]: Calculating checksum failed unusually, please report this to the list so it can be fixed command: rc=flag; [ -r /etc/apt/apt.conf.d/10periodic ] || rc=2; [ -f /etc/apt/apt.conf.d/10periodic ] || rc=1; [ -d /etc/apt/apt.conf.d/10periodic ] && rc=3; python -V 2>/dev/null || rc=4; [ x""$rc"" != ""xflag"" ] && echo ""${rc} ""/etc/apt/apt.conf.d/10periodic && exit 0; (python -c 'import hashlib; BLOCKSIZE = 65536; hasher = hashlib.sha1(); afile = open(""'/etc/apt/apt.conf.d/10periodic'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (python -c 'import sha; BLOCKSIZE = 65536; hasher = sha.sha(); afile = open(""'/etc/apt/apt.conf.d/10periodic'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (echo '0 '/etc/apt/apt.conf.d/10periodic) ---- output: {'stdout_lines': [], 'stdout': u'', 'stderr': u'', 'rc': 255} ---- ``` ##### Steps To Reproduce: Unable to reproduce. Reporting because ansible said to. 
``` - name: Adjust APT update intervals copy: src: config/apt_periodic dest: /etc/apt/apt.conf.d/10periodic ``` ",True,"while running a copy module task, ansible requested this info passed to the list. - ##### Issue Type: - Bug Report ##### Plugin Name: ##### Ansible Version: ``` 23:06 $ ansible --version ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: Dynamic inventory from dopy ``` [ssh_connection] scp_if_ssh=True ssh_args= -o ForwardAgent=yes ``` ##### Environment: OSX 10.10 host Ubuntu 14.04 remote ##### Summary: While running playbook, ansible reported back ``` [WARNING]: Calculating checksum failed unusually, please report this to the list so it can be fixed command: rc=flag; [ -r /etc/apt/apt.conf.d/10periodic ] || rc=2; [ -f /etc/apt/apt.conf.d/10periodic ] || rc=1; [ -d /etc/apt/apt.conf.d/10periodic ] && rc=3; python -V 2>/dev/null || rc=4; [ x""$rc"" != ""xflag"" ] && echo ""${rc} ""/etc/apt/apt.conf.d/10periodic && exit 0; (python -c 'import hashlib; BLOCKSIZE = 65536; hasher = hashlib.sha1(); afile = open(""'/etc/apt/apt.conf.d/10periodic'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (python -c 'import sha; BLOCKSIZE = 65536; hasher = sha.sha(); afile = open(""'/etc/apt/apt.conf.d/10periodic'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (echo '0 '/etc/apt/apt.conf.d/10periodic) ---- output: {'stdout_lines': [], 'stdout': u'', 'stderr': u'', 'rc': 255} ---- ``` ##### Steps To Reproduce: Unable to reproduce. Reporting because ansible said to. 
``` - name: Adjust APT update intervals copy: src: config/apt_periodic dest: /etc/apt/apt.conf.d/10periodic ``` ",1,while running a copy module task ansible requested this info passed to the list issue type bug report plugin name ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible configuration dynamic inventory from dopy scp if ssh true ssh args o forwardagent yes environment osx host ubuntu remote summary while running playbook ansible reported back calculating checksum failed unusually please report this to the list so it can be fixed command rc flag r etc apt apt conf d rc rc rc python v dev null rc echo rc etc apt apt conf d exit python c import hashlib blocksize hasher hashlib afile open etc apt apt conf d rb buf afile read blocksize while len buf hasher update buf buf afile read blocksize afile close print hasher hexdigest dev null python c import sha blocksize hasher sha sha afile open etc apt apt conf d rb buf afile read blocksize while len buf hasher update buf buf afile read blocksize afile close print hasher hexdigest dev null echo etc apt apt conf d output stdout lines stdout u stderr u rc steps to reproduce unable to reproduce reporting because ansible said to name adjust apt update intervals copy src config apt periodic dest etc apt apt conf d ,1 1902,6577555850.0,IssuesEvent,2017-09-12 01:44:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,GCE module missing local SSD option,affects_2.0 bug_report cloud feature_idea gce waiting_on_maintainer,"##### Issue Type: Bug Report ##### Plugin Name: gce ##### Ansible Version: ``` ansible 2.0.1.0 config file = /Users/vwoo/.ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: ``` [defaults] host_key_checking = False ``` ##### Environment: N/A ##### Summary: Google Compute Engine has supported [local SSD scratch disks](https://cloud.google.com/compute/docs/disks/local-ssd) for a while. These are very useful, high-performance ephemeral disks you can attach to instances _only at create time_. Libcloud [supports creating instances with local SSDs](https://github.com/apache/libcloud/blob/trunk/demos/gce_demo.py#L331) already: (courtesy @erjohnso). However, the official [ansible gce module](http://docs.ansible.com/ansible/gce_module.html) does not provide a way to attach these local disks. ##### Steps To Reproduce: Ideally, we would like to be able to say something like: ``` gce:yml instance_names: example local_ssd: - interface: nvme - interface: nvme ``` which would create an instance with two local SSDs using the NVMe interface. ",True,"GCE module missing local SSD option - ##### Issue Type: Bug Report ##### Plugin Name: gce ##### Ansible Version: ``` ansible 2.0.1.0 config file = /Users/vwoo/.ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: ``` [defaults] host_key_checking = False ``` ##### Environment: N/A ##### Summary: Google Compute Engine has supported [local SSD scratch disks](https://cloud.google.com/compute/docs/disks/local-ssd) for a while. These are very useful, high-performance ephemeral disks you can attach to instances _only at create time_. Libcloud [supports creating instances with local SSDs](https://github.com/apache/libcloud/blob/trunk/demos/gce_demo.py#L331) already: (courtesy @erjohnso). 
However, the official [ansible gce module](http://docs.ansible.com/ansible/gce_module.html) does not provide a way to attach these local disks. ##### Steps To Reproduce: Ideally, we would like to be able to say something like: ``` gce:yml instance_names: example local_ssd: - interface: nvme - interface: nvme ``` which would create an instance with two local SSDs using the NVMe interface. ",1,gce module missing local ssd option issue type bug report plugin name gce ansible version ansible config file users vwoo ansible cfg configured module search path default w o overrides ansible configuration host key checking false environment n a summary google compute engine has supported for a while these are very useful high performance ephemeral disks you can attach to instances only at create time libcloud already courtesy erjohnso however the official does not provide a way to attach these local disks steps to reproduce ideally we would like to be able to say something like gce yml instance names example local ssd interface nvme interface nvme which would create an instance with two local ssds using the nvme interface ,1 1820,6577329226.0,IssuesEvent,2017-09-12 00:08:56,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Compute Engine: Provide a way to specify initial boot disk type and size,affects_2.1 cloud feature_idea gce waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME gce module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /Users/lihanli/projects/gce/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### OS / ENVIRONMENT OS X El Capitan ##### SUMMARY `gce` module cannot specify boot disk size and type. See http://docs.ansible.com/ansible/gce_module.html The **disks** parameter a list of persistent disks to attach to the instance; a string value gives the name of the disk; alternatively, a dictionary value can define 'name' and 'mode' ('READ_ONLY' or 'READ_WRITE'). The first entry will be the boot disk (which must be READ_WRITE). It does not let you specify the size and the type. ##### STEPS TO REPRODUCE ``` ``` ",True,"Compute Engine: Provide a way to specify initial boot disk type and size - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME gce module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /Users/lihanli/projects/gce/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### OS / ENVIRONMENT OS X El Capitan ##### SUMMARY `gce` module cannot specify boot disk size and type. See http://docs.ansible.com/ansible/gce_module.html The **disks** parameter a list of persistent disks to attach to the instance; a string value gives the name of the disk; alternatively, a dictionary value can define 'name' and 'mode' ('READ_ONLY' or 'READ_WRITE'). The first entry will be the boot disk (which must be READ_WRITE). It does not let you specify the size and the type. 
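One possible workaround at the time, sketched here under the assumption that the installed gce_pd module exposes `size_gb` and `disk_type`, is to pre-create the boot disk and hand it to `gce` as the first entry of `disks`; the disk name, image, machine type and zone below are placeholder values.

```yaml
# Sketch: create the boot disk explicitly so its size and type can be chosen,
# then boot the instance from it (the first disks entry becomes the boot disk).
- name: Pre-create a boot disk with an explicit size and type
  gce_pd:
    name: web-boot-disk        # placeholder name
    image: debian-8            # placeholder image
    size_gb: 50
    disk_type: pd-ssd
    zone: us-central1-a
    state: present

- name: Boot an instance from the pre-created disk
  gce:
    instance_names: web-1      # placeholder name
    machine_type: n1-standard-1
    zone: us-central1-a
    disks:
      - name: web-boot-disk
        mode: READ_WRITE
```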
##### STEPS TO REPRODUCE ``` ``` ",1,compute engine provide a way to specify initial boot disk type and size issue type feature idea component name gce module ansible version ansible config file users lihanli projects gce ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment os x el capitan summary gce module cannot specify boot disk size and type see the disks parameter a list of persistent disks to attach to the instance a string value gives the name of the disk alternatively a dictionary value can define name and mode read only or read write the first entry will be the boot disk which must be read write it does not let you specify the size and the type steps to reproduce ,1 916,4621653846.0,IssuesEvent,2016-09-27 02:43:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"junos_command errors out with ""TypeError: Type 'str' cannot be serialized""",affects_2.1 bug_report in progress networking P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_command core module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes to configuration ##### OS / ENVIRONMENT $ uname -a Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this. **Script:** name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch **Error:** TASK [/ GET USERS / Get list of all the current users on switch] *************** fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only! \n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last): \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 261, in \n main() \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 233, in main \n xmlout.append(xml_to_string(response[index])) \n File \""/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized. \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ##### STEPS TO REPRODUCE Mentioned in above section ``` name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch ``` ##### EXPECTED RESULTS returns the list of users on juniper switch. 
no error should be expected. ##### ACTUAL RESULTS ``` TASK [/ GET USERS / Get list of all the current users on switch] *************** EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" && echo ansible-tmp-1472681123.92-107492843053729=""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" ) && sleep 0' PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf ""/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/"" > /dev/null 2>&1 && sleep 0' fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""junos_command""}, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last):\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 261, in \n main()\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \""/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"junos_command errors out with ""TypeError: Type 'str' cannot be serialized"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_command core module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes to configuration ##### OS / ENVIRONMENT $ uname -a Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this. **Script:** name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch **Error:** TASK [/ GET USERS / Get list of all the current users on switch] *************** fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only! 
\n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last): \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 261, in \n main() \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 233, in main \n xmlout.append(xml_to_string(response[index])) \n File \""/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized. \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ##### STEPS TO REPRODUCE Mentioned in above section ``` name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch ``` ##### EXPECTED RESULTS returns the list of users on juniper switch. no error should be expected. ##### ACTUAL RESULTS ``` TASK [/ GET USERS / Get list of all the current users on switch] *************** EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" && echo ansible-tmp-1472681123.92-107492843053729=""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" ) && sleep 0' PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf ""/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/"" > /dev/null 2>&1 && sleep 0' fatal: [rlab-er1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""junos_command""}, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last):\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 261, in \n main()\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \""/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,junos command errors out with typeerror type str cannot be serialized issue type bug report component name junos command core module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes to configuration os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific uname a linux dev net generic ubuntu smp wed jul utc gnu linux summary i have an ansible script where i am simply using junos command module to get users list from juniper switch below is the snippet of my code i keep getting the runtimewarning and typeerror type str cannot be serialized whenever i try to run this moreover i have been successfully able to run commands like show version using the below code itself but just not show configuration system login command please look into this script name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch error task fatal failed changed false failed true module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible lvompp ansible module junos command py line in n main n file tmp ansible lvompp ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible lvompp ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used mentioned in above section name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch expected results returns the list of users on juniper switch no error should be expected actual results task exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home mbhadoria ansible tmp ansible tmp junos command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr 
bin python home mbhadoria ansible tmp ansible tmp junos command rm rf home mbhadoria ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name junos command module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible ansible module junos command py line in n main n file tmp ansible ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false ,1 1199,5133072787.0,IssuesEvent,2017-01-11 01:40:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,s3 module is not idempotent for bucket creation (fails when bucket already exists),affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME s3 ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY s3 bucket creation fails when bucket already exists ##### STEPS TO REPRODUCE 1. create bucket with s3 module 2. attempt to create same bucket with s3 module ``` - name: create s3 buckets s3: bucket: $BUCKET_NAME mode: create become: false delegate_to: localhost ``` ##### EXPECTED RESULTS Bucket should create the first time. The second time bucket should remain and not throw an error. ##### ACTUAL RESULTS Bucket creation failed on 2nd run. ``` 20:47:39 An exception occurred during task execution. The full traceback is: 20:47:39 Traceback (most recent call last): 20:47:39 File ""$PATH//ansible-tmp-1476132487.47-226397347317836/s3"", line 2846, in 20:47:39 main() 20:47:39 File ""$PATH/ansible-tmp-1476132487.47-226397347317836/s3"", line 610, in main 20:47:39 module.exit_json(msg=""Bucket created successfully"", changed=create_bucket(module, s3, bucket, location)) 20:47:39 File ""$PATH//ansible-tmp-1476132487.47-226397347317836/s3"", line 244, in create_bucket 20:47:39 bucket = s3.create_bucket(bucket, location=location) 20:47:39 File ""/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py"", line 616, in create_bucket 20:47:39 response.status, response.reason, body) 20:47:39 boto.exception.S3CreateError: S3CreateError: 409 Conflict 20:47:39 20:47:39 BucketAlreadyExistsThe requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.blend5FB9B826861513DD$HOSTID ``` ",True,"s3 module is not idempotent for bucket creation (fails when bucket already exists) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME s3 ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY s3 bucket creation fails when bucket already exists ##### STEPS TO REPRODUCE 1. create bucket with s3 module 2. attempt to create same bucket with s3 module ``` - name: create s3 buckets s3: bucket: $BUCKET_NAME mode: create become: false delegate_to: localhost ``` ##### EXPECTED RESULTS Bucket should create the first time. The second time bucket should remain and not throw an error. 
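A hedged alternative for this use case is the s3_bucket module (present since Ansible 2.0), which treats a bucket that already exists in the account as `ok` rather than as an error; the bucket name and region below are placeholders.

```yaml
# Sketch: idempotent bucket creation instead of s3 mode=create.
- name: Ensure the bucket exists without failing when it is already there
  s3_bucket:
    name: "{{ bucket_name }}"
    state: present
    region: us-east-1
  become: false
  delegate_to: localhost
```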
##### ACTUAL RESULTS Bucket creation failed on 2nd run. ``` 20:47:39 An exception occurred during task execution. The full traceback is: 20:47:39 Traceback (most recent call last): 20:47:39 File ""$PATH//ansible-tmp-1476132487.47-226397347317836/s3"", line 2846, in 20:47:39 main() 20:47:39 File ""$PATH/ansible-tmp-1476132487.47-226397347317836/s3"", line 610, in main 20:47:39 module.exit_json(msg=""Bucket created successfully"", changed=create_bucket(module, s3, bucket, location)) 20:47:39 File ""$PATH//ansible-tmp-1476132487.47-226397347317836/s3"", line 244, in create_bucket 20:47:39 bucket = s3.create_bucket(bucket, location=location) 20:47:39 File ""/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py"", line 616, in create_bucket 20:47:39 response.status, response.reason, body) 20:47:39 boto.exception.S3CreateError: S3CreateError: 409 Conflict 20:47:39 20:47:39 BucketAlreadyExistsThe requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.blend5FB9B826861513DD$HOSTID ``` ",1, module is not idempotent for bucket creation fails when bucket already exists issue type bug report component name ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary bucket creation fails when bucket already exists steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create bucket with module attempt to create same bucket with module name create buckets bucket bucket name mode create become false delegate to localhost expected results bucket should create the first time the second time bucket should remain and not throw an error actual results bucket creation failed on run an exception occurred during task execution the full traceback is traceback most recent call last file path ansible tmp line in main file path ansible tmp line in main module exit json msg bucket created successfully changed create bucket module bucket location file path ansible tmp line in create bucket bucket create bucket bucket location location file usr local lib dist packages boto connection py line in create bucket response status response reason body boto exception conflict bucketalreadyexists the requested bucket name is not available the bucket namespace is shared by all users of the system please select a different name and try again blend hostid ,1 1715,6574461391.0,IssuesEvent,2017-09-11 12:58:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,junos_config broken configurations and idempotency,affects_2.2 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/centos/provisioning-metal/ansible.cfg configured module search path = ['ephemeral_roles/plugins/library'] ``` ##### CONFIGURATION Roles, plugins, inventory paths. ##### OS / ENVIRONMENT CentOS 7 managing a Juniper EX3300 stack running 12.3R8.7 ##### SUMMARY The junos_config deploys broken/inconsistent configurations every time I run it. 
##### STEPS TO REPRODUCE Starting with blank port configuration: ``` logan@stack01.rack92> show configuration interfaces ae11 {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 {master:0} ``` The playbook used is the following: ``` - debug: msg: ""{{ switch_commands | default([]) }}"" when: ""{{ not switch_commands_fact | skipped }}"" - name: Apply JunOS configuration local_action: module: junos_config lines: ""{{ switch_commands | default([]) }}"" provider: ""{{ hostvars[switch_port_hostname][hostvars[switch_port_hostname]['network_provider']] }}"" when: ""{{ not switch_commands_fact | skipped }}"" ``` First run of playbook: ``` TASK [debug] ******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ae11"", ""set interfaces ae11 description \""lsn-mc1002\"""", ""set interfaces ae11 aggregated-ether-options lacp active"", ""set interfaces ae11.0 family ethernet-switching port-mode access"", ""set interfaces ae11.0 family ethernet-switching vlan members ANSIBLE-MANAGEMENT"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] TASK [debug] ******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ge-0/0/6"", ""set interfaces ge-0/0/6 description \""lsn-mc1002\"""", ""set interfaces ge-0/0/6 ether-options 802.3ad ae11"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] TASK [debug] ******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ge-1/0/6"", ""set interfaces ge-1/0/6 description \""lsn-mc1002\"""", ""set interfaces ge-1/0/6 ether-options 802.3ad ae11"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] ``` Resulting configuration from first run looks good: ``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; aggregated-ether-options { lacp { active; } } unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} ``` Second run. Ansible output is identical so I'll refrain from pasting it again. 
``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; {master:0} ``` 3rd run: ``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; aggregated-ether-options { lacp { active; } } unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} ``` Subsequent runs oscillate back and forth between the broken and proper configurations.",True,"junos_config broken configurations and idempotency - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/centos/provisioning-metal/ansible.cfg configured module search path = ['ephemeral_roles/plugins/library'] ``` ##### CONFIGURATION Roles, plugins, inventory paths. ##### OS / ENVIRONMENT CentOS 7 managing a Juniper EX3300 stack running 12.3R8.7 ##### SUMMARY The junos_config deploys broken/inconsistent configurations every time I run it. ##### STEPS TO REPRODUCE Starting with blank port configuration: ``` logan@stack01.rack92> show configuration interfaces ae11 {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 {master:0} ``` The playbook used is the following: ``` - debug: msg: ""{{ switch_commands | default([]) }}"" when: ""{{ not switch_commands_fact | skipped }}"" - name: Apply JunOS configuration local_action: module: junos_config lines: ""{{ switch_commands | default([]) }}"" provider: ""{{ hostvars[switch_port_hostname][hostvars[switch_port_hostname]['network_provider']] }}"" when: ""{{ not switch_commands_fact | skipped }}"" ``` First run of playbook: ``` TASK [debug] ******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ae11"", ""set interfaces ae11 description \""lsn-mc1002\"""", ""set interfaces ae11 aggregated-ether-options lacp active"", ""set interfaces ae11.0 family ethernet-switching port-mode access"", ""set interfaces ae11.0 family ethernet-switching vlan members ANSIBLE-MANAGEMENT"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] TASK [debug] ******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ge-0/0/6"", ""set interfaces ge-0/0/6 description \""lsn-mc1002\"""", ""set interfaces ge-0/0/6 ether-options 802.3ad ae11"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] TASK [debug] 
******************************************************************* ok: [lsn-mc1002] => { ""msg"": [ ""set interfaces vlan.2003 family inet address 10.3.8.1/24"", ""set vlans ANSIBLE-MANAGEMENT vlan-id 2003"", ""set vlans ANSIBLE-MANAGEMENT l3-interface vlan.2003"", ""delete interfaces ge-1/0/6"", ""set interfaces ge-1/0/6 description \""lsn-mc1002\"""", ""set interfaces ge-1/0/6 ether-options 802.3ad ae11"" ] } TASK [Apply JunOS configuration] *********************************************** changed: [lsn-mc1002 -> localhost] ``` Resulting configuration from first run looks good: ``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; aggregated-ether-options { lacp { active; } } unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} ``` Second run. Ansible output is identical so I'll refrain from pasting it again. ``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; {master:0} ``` 3rd run: ``` logan@stack01.rack92> show configuration interfaces ae11 description lsn-mc1002; aggregated-ether-options { lacp { active; } } unit 0 { family ethernet-switching { port-mode access; vlan { members ANSIBLE-MANAGEMENT; } } } {master:0} logan@stack01.rack92> show configuration interfaces ge-0/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} logan@stack01.rack92> show configuration interfaces ge-1/0/6 description lsn-mc1002; ether-options { 802.3ad ae11; } {master:0} ``` Subsequent runs oscillate back and forth between the broken and proper configurations.",1,junos config broken configurations and idempotency issue type bug report component name junos config ansible version ansible config file home centos provisioning metal ansible cfg configured module search path configuration roles plugins inventory paths os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos managing a juniper stack running summary the junos config deploys broken inconsistent configurations every time i run it steps to reproduce starting with blank port configuration logan show configuration interfaces master logan show configuration interfaces ge master logan show configuration interfaces ge master the playbook used is the following debug msg switch commands default when not switch commands fact skipped name apply junos configuration local action module junos config lines switch commands default provider hostvars when not switch commands fact skipped first run of playbook task ok msg set interfaces vlan family inet address set vlans ansible management vlan id set vlans ansible management interface vlan delete interfaces set interfaces description lsn set interfaces aggregated ether options lacp active set interfaces family ethernet switching port mode access set interfaces family ethernet switching vlan members ansible management task changed task ok msg set interfaces 
vlan family inet address set vlans ansible management vlan id set vlans ansible management interface vlan delete interfaces ge set interfaces ge description lsn set interfaces ge ether options task changed task ok msg set interfaces vlan family inet address set vlans ansible management vlan id set vlans ansible management interface vlan delete interfaces ge set interfaces ge description lsn set interfaces ge ether options task changed resulting configuration from first run looks good logan show configuration interfaces description lsn aggregated ether options lacp active unit family ethernet switching port mode access vlan members ansible management master logan show configuration interfaces ge description lsn ether options master logan show configuration interfaces ge description lsn ether options master second run ansible output is identical so i ll refrain from pasting it again logan show configuration interfaces description lsn unit family ethernet switching port mode access vlan members ansible management master logan show configuration interfaces ge description lsn master logan show configuration interfaces ge description lsn master run logan show configuration interfaces description lsn aggregated ether options lacp active unit family ethernet switching port mode access vlan members ansible management master logan show configuration interfaces ge description lsn ether options master logan show configuration interfaces ge description lsn ether options master subsequent runs oscillate back and forth between the broken and proper configurations ,1 958,4702357466.0,IssuesEvent,2016-10-13 01:43:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ios_config save: true - ""unable to load backup configuration""",affects_2.2 bug_report networking P1 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config iosxr_config ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 1c7890bf86) last updated 2016/09/26 10:42:34 (GMT +100) lib/ansible/modules/core: (devel cf243860ff) last updated 2016/09/26 10:42:39 (GMT +100) lib/ansible/modules/extras: (devel 7aab9cd93b) last updated 2016/09/26 10:42:41 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY From the code in `ios_config` it looks like we've got an exception in the exception handler ``` try: run(module, result) # Triggers exception except NetworkError: load_backup(module) exc = get_exception() module.fail_json(msg=str(exc)) ... def load_backup(module): try: module.cli(['exit', 'config replace flash:/ansible-rollback force']) # triggers exception except NetworkError: module.fail_json(msg='unable to load backup configuration') ``` 1) The underlying issue needs fixing. 2) Would it be worth capturing the exception earlier on in `main()`? 3) In `load_backup`, can we give the user more feedback about why the error occurred?
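On point (3), here is a minimal sketch of how `load_backup` could surface both failures instead of only the generic message. It is an illustration, not the shipped fix: it reuses the interfaces already quoted above (`module.cli`, `module.fail_json`, `get_exception`, `NetworkError`), while the `original_error` parameter and the extra result fields are assumptions introduced for the example.

```python
# Hypothetical variant of load_backup(): report why the rollback command failed
# and, when available, the error that triggered the rollback in the first place.
def load_backup(module, original_error=None):
    try:
        module.cli(['exit', 'config replace flash:/ansible-rollback force'])
    except NetworkError:
        exc = get_exception()
        module.fail_json(
            msg='unable to load backup configuration',
            rollback_error=str(exc),  # why the rollback itself failed
            original_error=str(original_error) if original_error else None,
        )

# Caller sketch, mirroring the snippet quoted above: pass the first exception
# through so it is not masked when the rollback also fails.
try:
    run(module, result)
except NetworkError:
    exc = get_exception()
    load_backup(module, original_error=exc)
    module.fail_json(msg=str(exc))
```

Since `fail_json` forwards arbitrary keyword arguments into the failure result, the extra fields would show up directly in the output the user sees.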
##### STEPS TO REPRODUCE ```yaml - name: setup ios_config: commands: - no description - no shutdown parents: - interface Loopback999 match: none provider: ""{{ cli }}"" - name: save config ios_config: save: true provider: ""{{ cli }}"" register: result ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [test_ios_config : save config] ******************************************* task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/save.yaml:15 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396 `"" && echo ansible-tmp-1474897504.34-175123254234396=""` echo $HOME/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396 `"" ) && sleep 0' PUT /tmp/tmp5Z2AeZ TO /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/"" > /dev/null 2>&1 && sleep 0' fatal: [ios01]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": true, ""src"": null, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""msg"": ""unable to load backup configuration"" ``` ",True,"ios_config save: true - ""unable to load backup configuration"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config iosxr_config ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 1c7890bf86) last updated 2016/09/26 10:42:34 (GMT +100) lib/ansible/modules/core: (devel cf243860ff) last updated 2016/09/26 10:42:39 (GMT +100) lib/ansible/modules/extras: (devel 7aab9cd93b) last updated 2016/09/26 10:42:41 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY From the code in `ios_config` looks like we've got an exception in the exception handler ``` try: run(module, result) # Triggers exception except NetworkError: load_backup(module) exc = get_exception() module.fail_json(msg=str(exc)) ... def load_backup(module): try: module.cli(['exit', 'config replace flash:/ansible-rollback force']) # triggers exception except NetworkError: module.fail_json(msg='unable to load backup configuration') ``` 1) Underlying issue needs fixing 2) Would it be worth Should we capture the exception earlier on in `main()` 3) In `load_backup` can be give the user more feedback about why the error occured? 
##### STEPS TO REPRODUCE ```yaml - name: setup ios_config: commands: - no description - no shutdown parents: - interface Loopback999 match: none provider: ""{{ cli }}"" - name: save config ios_config: save: true provider: ""{{ cli }}"" register: result ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ```yaml TASK [test_ios_config : save config] ******************************************* task path: /home/johnb/git/ansible-inc/testing-ios/roles/test_ios_config/tests/cli/save.yaml:15 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/core/network/ios/ios_config.py ESTABLISH LOCAL CONNECTION FOR USER: johnb EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396 `"" && echo ansible-tmp-1474897504.34-175123254234396=""` echo $HOME/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396 `"" ) && sleep 0' PUT /tmp/tmp5Z2AeZ TO /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py && sleep 0' EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/ios_config.py; rm -rf ""/home/johnb/.ansible/tmp/ansible-tmp-1474897504.34-175123254234396/"" > /dev/null 2>&1 && sleep 0' fatal: [ios01]: FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""after"": null, ""auth_pass"": null, ""authorize"": false, ""backup"": false, ""before"": null, ""config"": null, ""defaults"": false, ""force"": false, ""host"": ""ios01"", ""lines"": null, ""match"": ""line"", ""parents"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": { ""host"": ""ios01"", ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""transport"": ""cli"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"" }, ""replace"": ""line"", ""save"": true, ""src"": null, ""ssh_keyfile"": null, ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": true, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true } }, ""msg"": ""unable to load backup configuration"" ``` ",1,ios config save true unable to load backup configuration issue type bug report component name ios config iosxr config ansible version ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary from the code in ios config looks like we ve got an exception in the exception handler try run module result triggers exception except networkerror load backup module exc get exception module fail json msg str exc def load backup module try module cli triggers exception except networkerror module fail json msg unable to load backup configuration underlying issue needs fixing would it be worth should we capture the exception earlier on in main in load backup can be give the user more feedback about why the error occured steps to reproduce yaml name setup ios config commands no description no shutdown parents interface match none provider cli name save config ios config save true provider cli register result expected results actual results yaml task task path home johnb git ansible inc testing ios roles test ios config tests cli save yaml using module file home johnb git ansible inc ansible lib ansible modules core network ios ios config py establish local connection for user 
johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home johnb ansible tmp ansible tmp ios config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp ios config py sleep exec bin sh c python home johnb ansible tmp ansible tmp ios config py rm rf home johnb ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match line parents null password value specified in no log parameter port null provider host password value specified in no log parameter transport cli username value specified in no log parameter replace line save true src null ssh keyfile null timeout transport cli use ssl true username value specified in no log parameter validate certs true msg unable to load backup configuration ,1 1146,5004924100.0,IssuesEvent,2016-12-12 08:53:23,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ad-hoc shell module freeze and never return to linux prompt,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible-2.1.1.0 ``` ##### CONFIGURATION Nothing configured ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.8 (Santiago) ##### SUMMARY When running a remote command with the shell module in ansible, for a list of servers and with sudo, the command runs fine on the remote servers but the ansible command never returns to the Linux prompt. Ctrl+C doesn't work to cancel it. strace of the process shows an infinite loop for the ansible process. The only way to finish the run is to kill the ansible process. ##### STEPS TO REPRODUCE ``` On a Red Hat 6.8 system with python 2.6.6 and ansible 2.1.1.0, run this with a file filled with several servers: #> ansible -u -i /tmp/list all -k -s -m shell -a ""uptime"" ``` ##### EXPECTED RESULTS The command never ends. It cannot be stopped/cancelled with Ctrl+C and has to be killed.
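As a stop-gap while the hang is investigated, one hedged diagnostic is to wrap the same ad-hoc invocation in a hard timeout so a stuck run gets reaped automatically instead of requiring a manual kill. The argument list and the 300-second limit below are illustrative placeholders (the `-u` username from the report is intentionally left out), and this assumes a Python 3.5+ controller shell; it is a workaround sketch, not a fix for the underlying issue.

```python
# Diagnostic wrapper around the ad-hoc run described above: if ansible never
# returns, subprocess.run() kills the child once the timeout expires.
import subprocess

cmd = ["ansible", "all", "-i", "/tmp/list", "-k", "-s", "-m", "shell", "-a", "uptime"]

try:
    completed = subprocess.run(cmd, timeout=300)
    print("ansible exited with return code", completed.returncode)
except subprocess.TimeoutExpired:
    print("ansible did not return within 300 seconds; the child process was killed")
```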
##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file SSH password: Loaded callback minimal of type stdout, v2.0 ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d20 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX2 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-231739456228208 `"" && echo ansible-tmp-1480423293.83-231739456228208=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-231739456228208 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d21 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX3 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-86346344087207 `"" && echo ansible-tmp-1480423293.84-86346344087207=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-86346344087207 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d19 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-223315962231458 `"" && echo ansible-tmp-1480423293.83-223315962231458=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-223315962231458 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d18 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-137653613389320 `"" && echo ansible-tmp-1480423293.84-137653613389320=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-137653613389320 `"" ) && sleep 0'""'""'' PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/command SSH: EXEC sshpass -d20 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX2]' PUT /tmp/tmpnRVLB6 TO /home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/command SSH: EXEC sshpass -d21 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX3]' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d20 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX2 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ruzkfjtlzgbojjrcqkwsmvyrqbjlqkvq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d21 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX3 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-imlcvpsrsyiqbutgbwqjncqupwglioyl; LANG=en_US.UTF-8 
LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX3 | SUCCESS | rc=0 >> 13:41:33 up 38 days, 9:14, 1 user, load average: 0.00, 0.00, 0.00 PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/command SSH: EXEC sshpass -d18 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX4]' XXXXX2 | SUCCESS | rc=0 >> 13:41:34 up 38 days, 8:11, 10 users, load average: 0.00, 0.00, 0.00 ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d18 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-hafdetflhotpilnjdzjaagamprxblawg; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX4 | SUCCESS | rc=0 >> 13:41:33 up 26 days, 1:16, 1 user, load average: 0.00, 0.00, 0.00 PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/command SSH: EXEC sshpass -d19 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX1]' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d19 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-qodqnawzjywbdlugkfynvmkpczcswokg; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX1 | SUCCESS | rc=0 >> 13:41:45 up 26 days, 1:13, 1 user, load average: 0.00, 0.00, 0.00 ``` [Debug_ansible_shell.txt](https://github.com/ansible/ansible-modules-core/files/619617/Debug_ansible_shell.txt)",True,"Ad-hoc shell module freeze and never return to linux prompt - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible-2.1.1.0 ``` ##### CONFIGURATION Nothing configured ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 6.8 (Santiago) ##### SUMMARY When running a remote command with the module shell in ansible, for a list of servers and with sudo, the command run fine in the remote servers but the ansible command never return to the linux prompt. Crtl+C doesn´t work for cancel. strace of the process shows an infinite loop for the ansible process. The only way to finish the run is to kill the ansible process. ##### STEPS TO REPRODUCE ``` In a Red Had 6.8 with python 2.6.6 and the ansible version 2.1.1.0 run this with a file filled with several servers: #> ansible -u -i /tmp/list all -k -s -m shell -a ""uptime"" ``` ##### EXPECTED RESULTS The command never ends. 
Cannot be stoped/canceled with Crtl+c and have to be killed. ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file SSH password: Loaded callback minimal of type stdout, v2.0 ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d20 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX2 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-231739456228208 `"" && echo ansible-tmp-1480423293.83-231739456228208=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-231739456228208 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d21 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX3 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-86346344087207 `"" && echo ansible-tmp-1480423293.84-86346344087207=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-86346344087207 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d19 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-223315962231458 `"" && echo ansible-tmp-1480423293.83-223315962231458=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.83-223315962231458 `"" ) && sleep 0'""'""'' SSH: EXEC sshpass -d18 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r XXXXX4 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-137653613389320 `"" && echo ansible-tmp-1480423293.84-137653613389320=""` echo $HOME/.ansible/tmp/ansible-tmp-1480423293.84-137653613389320 `"" ) && sleep 0'""'""'' PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/command SSH: EXEC sshpass -d20 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX2]' PUT /tmp/tmpnRVLB6 TO /home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/command SSH: EXEC sshpass -d21 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX3]' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d20 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX2 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ruzkfjtlzgbojjrcqkwsmvyrqbjlqkvq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.83-231739456228208/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d21 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX3 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo 
BECOME-SUCCESS-imlcvpsrsyiqbutgbwqjncqupwglioyl; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.84-86346344087207/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX3 | SUCCESS | rc=0 >> 13:41:33 up 38 days, 9:14, 1 user, load average: 0.00, 0.00, 0.00 PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/command SSH: EXEC sshpass -d18 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX4]' XXXXX2 | SUCCESS | rc=0 >> 13:41:34 up 38 days, 8:11, 10 users, load average: 0.00, 0.00, 0.00 ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d18 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX4 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-hafdetflhotpilnjdzjaagamprxblawg; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.84-137653613389320/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX4 | SUCCESS | rc=0 >> 13:41:33 up 26 days, 1:16, 1 user, load average: 0.00, 0.00, 0.00 PUT /tmp/tmpXJBHQT TO /home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/command SSH: EXEC sshpass -d19 sftp -o BatchMode=no -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[XXXXX1]' ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC sshpass -d19 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User= -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt XXXXX1 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-qodqnawzjywbdlugkfynvmkpczcswokg; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/command; rm -rf ""/home//.ansible/tmp/ansible-tmp-1480423293.83-223315962231458/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' XXXXX1 | SUCCESS | rc=0 >> 13:41:45 up 26 days, 1:13, 1 user, load average: 0.00, 0.00, 0.00 ``` [Debug_ansible_shell.txt](https://github.com/ansible/ansible-modules-core/files/619617/Debug_ansible_shell.txt)",1,ad hoc shell module freeze and never return to linux prompt issue type bug report component name shell ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing configured os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific red hat enterprise linux server release santiago summary when running a remote command with the module shell in ansible for a list of servers and with sudo the command run fine in the remote servers but the ansible command never return to the linux prompt crtl c doesn´t work for cancel strace of the process shows an infinite loop for the ansible process the only way to finish the run is to kill the ansible process steps to reproduce for bugs show exactly how to reproduce the problem for new 
features show how the feature would be used in a red had with python and the ansible version run this with a file filled with several servers ansible u i tmp list all k s m shell a uptime expected results the command never ends cannot be stoped canceled with crtl c and have to be killed actual results using etc ansible ansible cfg as config file ssh password loaded callback minimal of type stdout establish ssh connection for user establish ssh connection for user establish ssh connection for user establish ssh connection for user ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpxjbhqt to home ansible tmp ansible tmp command ssh exec sshpass sftp o batchmode no b c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r put tmp to home ansible tmp ansible tmp command ssh exec sshpass sftp o batchmode no b c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r establish ssh connection for user ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success ruzkfjtlzgbojjrcqkwsmvyrqbjlqkvq lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp command rm rf home ansible tmp ansible tmp dev null sleep establish ssh connection for user ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success imlcvpsrsyiqbutgbwqjncqupwglioyl lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp command rm rf home ansible tmp ansible tmp dev null sleep success rc up days user load average put tmp tmpxjbhqt to home ansible tmp ansible tmp command ssh exec sshpass sftp o batchmode no b c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r success rc up days users load average establish ssh connection for user ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success hafdetflhotpilnjdzjaagamprxblawg lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp command rm rf home ansible tmp ansible tmp dev null sleep success rc up days user load average put tmp 
tmpxjbhqt to home ansible tmp ansible tmp command ssh exec sshpass sftp o batchmode no b c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r establish ssh connection for user ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user o connecttimeout o controlpath root ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success qodqnawzjywbdlugkfynvmkpczcswokg lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp command rm rf home ansible tmp ansible tmp dev null sleep success rc up days user load average ,1 1807,6575943739.0,IssuesEvent,2017-09-11 17:55:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,How do I remove references in file /etc/services ,affects_2.0 feature_idea waiting_on_maintainer," Yes Not in GitHub ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible-2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY want to uninstall Netbackup for rhel, Remove NetBackup references in the /etc/services file: # NetBackup services # bpjava-msvc 13722/tcp bpjava-msvc bpcd 13782/tcp bpcd vnetd 13724/tcp vnetd vopied 13783/tcp vopied ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"How do I remove references in file /etc/services - Yes Not in GitHub ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ##### ANSIBLE VERSION ``` ansible-2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY want to uninstall Netbackup for rhel, Remove NetBackup references in the /etc/services file: # NetBackup services # bpjava-msvc 13722/tcp bpjava-msvc bpcd 13782/tcp bpcd vnetd 13724/tcp vnetd vopied 13783/tcp vopied ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,how do i remove references in file etc services yes not in github issue type feature idea component name ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary want to uninstall netbackup for rhel remove netbackup references in the etc services file netbackup services bpjava msvc tcp bpjava msvc bpcd tcp bpcd vnetd tcp vnetd vopied tcp vopied steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results ,1 908,4577132249.0,IssuesEvent,2016-09-17 01:43:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Guest Customization template in vSphere_Guest module,affects_2.1 cloud feature_idea vmware waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` 2.1.1 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Does the vsphere_guest module support applying a guest customization template while deploying a VM from template? ",True,"Guest Customization template in vSphere_Guest module - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` 2.1.1 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Does the vsphere_guest module support applying a guest customization template while deploying a VM from template? 
",1,guest customization template in vsphere guest module issue type feature idea component name vsphere guest ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary does the vsphere guest module support applying a guest customization template while deploying a vm from template ,1 1024,4818540308.0,IssuesEvent,2016-11-04 16:38:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Add update_password option to os_user module,affects_2.1 cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_user ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY The `os_user` module with a password specified for a user will always report 'changed'. The conclusion of the bug report in #5183 was that in order to ""fix"" this we need to add another parameter like the on in the `user` module. I.e a parameter called `update_password` that has options `on_create` or `always`. ##### STEPS TO REPRODUCE ``` - name: ""Create test user"" os_user: name: test state: present password: very-secret default_project: a-existing-project update_password: on_create ``` ##### EXPECTED RESULTS On first run, the user would be created and the password set. On the second run, given that nothing changed, the task would say `ok`. If the parameter would be `update_password: always` on the other hand, the module should always set the password and would always report `changed` ",True,"Add update_password option to os_user module - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_user ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY The `os_user` module with a password specified for a user will always report 'changed'. The conclusion of the bug report in #5183 was that in order to ""fix"" this we need to add another parameter like the on in the `user` module. I.e a parameter called `update_password` that has options `on_create` or `always`. ##### STEPS TO REPRODUCE ``` - name: ""Create test user"" os_user: name: test state: present password: very-secret default_project: a-existing-project update_password: on_create ``` ##### EXPECTED RESULTS On first run, the user would be created and the password set. On the second run, given that nothing changed, the task would say `ok`. 
If the parameter would be `update_password: always` on the other hand, the module should always set the password and would always report `changed` ",1,add update password option to os user module issue type feature idea component name os user ansible version ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary the os user module with a password specified for a user will always report changed the conclusion of the bug report in was that in order to fix this we need to add another parameter like the on in the user module i e a parameter called update password that has options on create or always steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create test user os user name test state present password very secret default project a existing project update password on create expected results on first run the user would be created and the password set on the second run given that nothing changed the task would say ok if the parameter would be update password always on the other hand the module should always set the password and would always report changed ,1 818,4441895799.0,IssuesEvent,2016-08-19 11:13:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unarchive: no support for .bz2 and .gz files (only tar.* and zip),bug_report docs_report waiting_on_maintainer,"##### Issue Type: - Documentation Report ##### Plugin Name: unarchive ##### Ansible Version: ansible 2.0.0.2 ##### Ansible Configuration: Default. No changes. ##### Environment: N/A ##### Summary: unarchive module only supports tar.* and .zip files, but the documentation does not mention it and the error message is utterly unhelpful. If you try to use the module with .bz2 or .gz file (no tar), the error message says ""Failed to find handler for xxxx. Make sure the required command to extract the file is installed."". The commands are present, it's simply the module which does not support extracting those files. Either the documentation should be amended (currently it indicates that almost any package can be extracted) or the support for more (commonly used) archivers added. ##### Steps To Reproduce: 1. Compress any file using `bzip2 -c` 2. Try using the `unarchive` module with this file. ##### Expected Results: The file would be extracted, similar to running `bzip2 -d`. ##### Actual Results: ``` fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""copy"": false, ""creates"": null, ""delimiter"": null, ""dest"": ""/tmp/SOURCES/"", ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""list_files"": false, ""mode"": null, ""original_basename"": ""xxx.gz"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/tmp/SOURCES/xxxl.gz""}}, ""msg"": ""Failed to find handler for \""/tmp/SOURCES/xxx.gz\"". 
Make sure the required command to extract the file is installed.""} ```",True,"unarchive: no support for .bz2 and .gz files (only tar.* and zip) - ##### Issue Type: - Documentation Report ##### Plugin Name: unarchive ##### Ansible Version: ansible 2.0.0.2 ##### Ansible Configuration: Default. No changes. ##### Environment: N/A ##### Summary: unarchive module only supports tar.* and .zip files, but the documentation does not mention it and the error message is utterly unhelpful. If you try to use the module with .bz2 or .gz file (no tar), the error message says ""Failed to find handler for xxxx. Make sure the required command to extract the file is installed."". The commands are present, it's simply the module which does not support extracting those files. Either the documentation should be amended (currently it indicates that almost any package can be extracted) or the support for more (commonly used) archivers added. ##### Steps To Reproduce: 1. Compress any file using `bzip2 -c` 2. Try using the `unarchive` module with this file. ##### Expected Results: The file would be extracted, similar to running `bzip2 -d`. ##### Actual Results: ``` fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""copy"": false, ""creates"": null, ""delimiter"": null, ""dest"": ""/tmp/SOURCES/"", ""directory_mode"": null, ""follow"": false, ""force"": null, ""group"": null, ""list_files"": false, ""mode"": null, ""original_basename"": ""xxx.gz"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/tmp/SOURCES/xxxl.gz""}}, ""msg"": ""Failed to find handler for \""/tmp/SOURCES/xxx.gz\"". Make sure the required command to extract the file is installed.""} ```",1,unarchive no support for and gz files only tar and zip issue type documentation report plugin name unarchive ansible version ansible ansible configuration default no changes environment n a summary unarchive module only supports tar and zip files but the documentation does not mention it and the error message is utterly unhelpful if you try to use the module with or gz file no tar the error message says failed to find handler for xxxx make sure the required command to extract the file is installed the commands are present it s simply the module which does not support extracting those files either the documentation should be amended currently it indicates that almost any package can be extracted or the support for more commonly used archivers added steps to reproduce compress any file using c try using the unarchive module with this file expected results the file would be extracted similar to running d actual results fatal failed changed false failed true invocation module args backup null content null copy false creates null delimiter null dest tmp sources directory mode null follow false force null group null list files false mode null original basename xxx gz owner null regexp null remote src null selevel null serole null setype null seuser null src tmp sources xxxl gz msg failed to find handler for tmp sources xxx gz make sure the required command to extract the file is installed ,1 939,4652274009.0,IssuesEvent,2016-10-03 13:31:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Fail to check package version,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ```docker_image``` module. 
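For the version-check failure reproduced below (docker-py 1.10.3 rejected against a 1.7.0 minimum), one classic way a 1.10.x release ends up flagged as older than 1.7.0 is a plain string comparison, which is lexicographic. The report does not confirm that this is the actual cause here; the snippet only illustrates the pitfall and the semantic comparison that avoids it.

```python
# Illustration only: as strings, '1.10.3' < '1.7.0' because the comparison is
# character by character ('1' < '7'); parsing the versions restores 10 > 7.
from distutils.version import LooseVersion

installed, minimum = "1.10.3", "1.7.0"

print(installed >= minimum)                               # False -> looks "too old"
print(LooseVersion(installed) >= LooseVersion(minimum))   # True  -> expected ordering
```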
##### ANSIBLE VERSION ```bash $ ansible --version ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default file used. ##### OS / ENVIRONMENT Docker Container (hosted by Debian Jessie). ##### SUMMARY When I want pull a docker image, Ansible reports that docker-py package installed is ```1.10.3``` whereas minimum required is ```1.7.0```. ##### STEPS TO REPRODUCE ```bash $ ansible -m docker_image -a ""name=nginx pull=yes"" foo foo | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.3. Minimum version required is 1.7.0."" } ```",True,"Fail to check package version - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ```docker_image``` module. ##### ANSIBLE VERSION ```bash $ ansible --version ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default file used. ##### OS / ENVIRONMENT Docker Container (hosted by Debian Jessie). ##### SUMMARY When I want pull a docker image, Ansible reports that docker-py package installed is ```1.10.3``` whereas minimum required is ```1.7.0```. ##### STEPS TO REPRODUCE ```bash $ ansible -m docker_image -a ""name=nginx pull=yes"" foo foo | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.3. Minimum version required is 1.7.0."" } ```",1,fail to check package version issue type bug report component name docker image module ansible version bash ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default file used os environment docker container hosted by debian jessie summary when i want pull a docker image ansible reports that docker py package installed is whereas minimum required is steps to reproduce bash ansible m docker image a name nginx pull yes foo foo failed changed false failed true msg error docker py version is minimum version required is ,1 1084,4932105553.0,IssuesEvent,2016-11-28 12:33:03,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Exception when using nxos_snmp_community,affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_snmp_community ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba or ansible 2.2.0.0-0.1.rc1 (ansible 2.1.2.0-1 does not include the module) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. title ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... 
nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Configuring the ACL for authorized NMS in new SNMPv2c nxos_config: provider: ""{{ connections.nxapi }}"" parents: - ""ip access-list authorized-snmp-nms"" lines: - ""permit ip host {{ nms_mgt_ip_address }} host {{ ansible_host }} log"" register: result - name: Configuring RO/RW community string in new IPv4/SNMPv2c nxos_snmp_community: provider: ""{{ connections.nxapi }}"" community: whatever access: rw acl: authorized-snmp-nms state: present register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS The community should be configured without an exception. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Configuring the ACL for authorized NMS in new SNMPv2c] ******* changed: [NX_OSv_Spine_12] => {""changed"": true, ""updates"": [""ip access-list authorized-snmp-nms"", ""permit ip host 172.21.100.1 host 172.21.100.12 log""], ""warnings"": []} changed: [NX_OSv_Spine_11] => {""changed"": true, ""updates"": [""ip access-list authorized-snmp-nms"", ""permit ip host 172.21.100.1 host 172.21.100.11 log""], ""warnings"": []} ... TASK [nxos_snmp : Configuring RO/RW community string in new IPv4/SNMPv2c] ****** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: string indices must be integers fatal: [NX_OSv_Spine_11]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 499, in \n main()\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 458, in main\n existing = get_snmp_community(module, community)\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 389, in get_snmp_community\n community_table = data['TABLE_snmp_community']['ROW_snmp_community']\nTypeError: string indices must be integers\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_11(config)# snmp-server community whatever rw NX_OSv_Spine_11(config)# snmp-server community whatever use-ipv4acl authorized-snmp-nms NX_OSv_Spine_11(config)# ``` ",True,"Exception when using nxos_snmp_community - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_snmp_community ##### ANSIBLE VERSION ``` ansible 2.3.0~git20161010.03765ba or ansible 2.2.0.0-0.1.rc1 (ansible 2.1.2.0-1 does not include the module) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **NX-OSv 7.3(0)D1(1)** ##### SUMMARY cf. 
title ##### STEPS TO REPRODUCE **Inventory ./hosts**: ``` [all:vars] nms_mgt_ip_address=172.21.100.1 [spines] NX_OSv_Spine_11 ansible_host=172.21.100.11 NX_OSv_Spine_12 ansible_host=172.21.100.12 ``` Structure passed as ""**provider**"": connections.ssh Defined in group_vars/nx_osv/connections.yml and a symbolic link in roles/nxos_snmp/defaults points to nx_osv ``` connections ... nxapi: transport: nxapi host: ""{{ ansible_host }}"" # ansible_port port: ""{{ http.port }}"" # ansible_user username: admin password: xxxxxxxx # enable_secret_password auth_pass: xxxxxxxx # http or https use_ssl: no validate_certs: ""{{ https.validate_certs }}"" ``` **Role**: nxos_snmp: ``` - include_vars: ""../defaults/{{ os_family }}/connections.yml"" ... - name: Configuring the ACL for authorized NMS in new SNMPv2c nxos_config: provider: ""{{ connections.nxapi }}"" parents: - ""ip access-list authorized-snmp-nms"" lines: - ""permit ip host {{ nms_mgt_ip_address }} host {{ ansible_host }} log"" register: result - name: Configuring RO/RW community string in new IPv4/SNMPv2c nxos_snmp_community: provider: ""{{ connections.nxapi }}"" community: whatever access: rw acl: authorized-snmp-nms state: present register: result ``` **Playbook**: ``` - name: Configuring SNMP on NX-OS/NX-OSv hosts: - nx_osv roles: - nxos_snmp ``` ##### EXPECTED RESULTS The community should be configured without an exception. ##### ACTUAL RESULTS ``` TASK [nxos_snmp : Configuring the ACL for authorized NMS in new SNMPv2c] ******* changed: [NX_OSv_Spine_12] => {""changed"": true, ""updates"": [""ip access-list authorized-snmp-nms"", ""permit ip host 172.21.100.1 host 172.21.100.12 log""], ""warnings"": []} changed: [NX_OSv_Spine_11] => {""changed"": true, ""updates"": [""ip access-list authorized-snmp-nms"", ""permit ip host 172.21.100.1 host 172.21.100.11 log""], ""warnings"": []} ... TASK [nxos_snmp : Configuring RO/RW community string in new IPv4/SNMPv2c] ****** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: string indices must be integers fatal: [NX_OSv_Spine_11]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 499, in \n main()\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 458, in main\n existing = get_snmp_community(module, community)\n File \""/tmp/ansible_55m6Ao/ansible_module_nxos_snmp_community.py\"", line 389, in get_snmp_community\n community_table = data['TABLE_snmp_community']['ROW_snmp_community']\nTypeError: string indices must be integers\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` No issue when configuring through the CLI: ``` NX_OSv_Spine_11(config)# snmp-server community whatever rw NX_OSv_Spine_11(config)# snmp-server community whatever use-ipv4acl authorized-snmp-nms NX_OSv_Spine_11(config)# ``` ",1,exception when using nxos snmp community issue type bug report component name nxos snmp community ansible version ansible or ansible ansible does not include the module config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment host ubuntu target nx osv summary cf title steps to reproduce inventory hosts nms mgt ip address nx osv spine ansible host nx osv spine ansible host structure passed as provider connections ssh defined in group vars nx osv connections yml and a symbolic link in roles nxos snmp defaults points to nx osv connections nxapi transport nxapi host ansible host ansible port port http port ansible user username admin password xxxxxxxx enable secret password auth pass xxxxxxxx http or https use ssl no validate certs https validate certs role nxos snmp include vars defaults os family connections yml name configuring the acl for authorized nms in new nxos config provider connections nxapi parents ip access list authorized snmp nms lines permit ip host nms mgt ip address host ansible host log register result name configuring ro rw community string in new nxos snmp community provider connections nxapi community whatever access rw acl authorized snmp nms state present register result playbook name configuring snmp on nx os nx osv hosts nx osv roles nxos snmp expected results the community should be configured without an exception actual results task changed changed true updates warnings changed changed true updates warnings task an exception occurred during task execution to see the full traceback use vvv the error was typeerror string indices must be integers fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module nxos snmp community py line in n main n file tmp ansible ansible module nxos snmp community py line in main n existing get snmp community module community n file tmp ansible ansible module nxos snmp community py line in get snmp community n community table data ntypeerror string indices must be integers n module stdout msg module failure no issue when configuring through the cli nx osv spine config snmp server community whatever rw nx osv spine config snmp server community whatever use authorized snmp nms nx osv spine config ,1 1006,4776417136.0,IssuesEvent,2016-10-27 13:43:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum ignores notify 
handlers,affects_2.2 bug_report in progress P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 deb1e3ebc7) last updated 2016/10/26 14:40:50 (GMT -400) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/26 14:43:13 (GMT -400) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/26 14:43:17 (GMT -400) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ansible 2.3.0 (devel 56086f3b9e) last updated 2016/10/26 15:06:06 (GMT -400) lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/26 15:06:48 (GMT -400) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/26 15:06:52 (GMT -400) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT RHEL 7.2 ##### SUMMARY ##### STEPS TO REPRODUCE 1. Build a project directory similar to the one below: ``` ./ ├── group_vars │   └── rhel │   └── vars ├── hosts ├── main.yml └── roles └── test ├── handlers │   └── main.yml └── tasks └── main.yml ``` 2. Run playbook: ansible-playbook -i hosts main.yml ./main.yml ``` - hosts: all become: yes roles: - test ``` ./roles/test/handlers/main.yml ``` - name: handler1 debug: msg: ""handler 1 run"" - name: handler2 debug: msg: ""handler 2 run"" ``` ./roles/test/tasks/main.yml ``` - block: - name: ""ensure lsof is not installed"" yum: name: lsof state: absent - name: ""Double check"" command: ""yum list installed lsof"" ignore_errors: true - name: ""this should run handlers"" yum: name: lsof state: present notify: - handler1 - handler2 - name: ""Double check"" shell: ""yum list installed lsof"" tags: - t-test ``` ##### EXPECTED RESULTS As seen in 2.1.2: ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [test : ensure lsof is not installed] ************************************* changed: [host] TASK [test : Double check] ***************************************************** fatal: [host]: FAILED! 
=> {""changed"": true, ""cmd"": [""yum"", ""list"", ""installed"", ""lsof""], ""delta"": ""0:00:00.260975"", ""end"": ""2016-10-26 16:07:16.555671"", ""failed"": true, ""rc"": 1, ""start"": ""2016-10-26 16:07:16.294696"", ""stderr"": ""Error: No matching Packages to list"", ""stdout"": ""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos"", ""stdout_lines"": [""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos""], ""warnings"": [""Consider using yum module rather than running yum""]} ...ignoring TASK [test : this should run handlers] ***************************************** changed: [host] TASK [test : Double check] ***************************************************** changed: [host] [WARNING]: Consider using yum module rather than running yum RUNNING HANDLER [test : handler1] ********************************************** ok: [host] => { ""msg"": ""handler 1 run"" } RUNNING HANDLER [test : handler2] ********************************************** ok: [host] => { ""msg"": ""handler 2 run"" } PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` Handlers are ran ##### ACTUAL RESULTS Observed in 2.2 and 2.3: ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [test : ensure lsof is not installed] ************************************* changed: [host] TASK [test : Double check] ***************************************************** fatal: [host]: FAILED! => {""changed"": true, ""cmd"": [""yum"", ""list"", ""installed"", ""lsof""], ""delta"": ""0:00:00.270957"", ""end"": ""2016-10-26 16:05:45.987169"", ""failed"": true, ""rc"": 1, ""start"": ""2016-10-26 16:05:45.716212"", ""stderr"": ""Error: No matching Packages to list"", ""stdout"": ""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos"", ""stdout_lines"": [""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos""], ""warnings"": [""Consider using yum module rather than running yum""]} ...ignoring TASK [test : this should run handlers] ***************************************** changed: [host] TASK [test : Double check] ***************************************************** changed: [host] [WARNING]: Consider using yum module rather than running yum PLAY RECAP ********************************************************************* host : ok=5 changed=3 unreachable=0 failed=0 ``` Handlers are not run",True,"Yum ignores notify handlers - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 deb1e3ebc7) last updated 2016/10/26 14:40:50 (GMT -400) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/26 14:43:13 (GMT -400) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/26 14:43:17 (GMT -400) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ansible 2.3.0 (devel 56086f3b9e) last updated 2016/10/26 15:06:06 (GMT -400) lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/26 15:06:48 (GMT -400) lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/26 15:06:52 (GMT -400) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT RHEL 7.2 ##### SUMMARY ##### STEPS TO REPRODUCE 1. 
Build a project directory similar to the one below: ``` ./ ├── group_vars │   └── rhel │   └── vars ├── hosts ├── main.yml └── roles └── test ├── handlers │   └── main.yml └── tasks └── main.yml ``` 2. Run playbook: ansible-playbook -i hosts main.yml ./main.yml ``` - hosts: all become: yes roles: - test ``` ./roles/test/handlers/main.yml ``` - name: handler1 debug: msg: ""handler 1 run"" - name: handler2 debug: msg: ""handler 2 run"" ``` ./roles/test/tasks/main.yml ``` - block: - name: ""ensure lsof is not installed"" yum: name: lsof state: absent - name: ""Double check"" command: ""yum list installed lsof"" ignore_errors: true - name: ""this should run handlers"" yum: name: lsof state: present notify: - handler1 - handler2 - name: ""Double check"" shell: ""yum list installed lsof"" tags: - t-test ``` ##### EXPECTED RESULTS As seen in 2.1.2: ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [test : ensure lsof is not installed] ************************************* changed: [host] TASK [test : Double check] ***************************************************** fatal: [host]: FAILED! => {""changed"": true, ""cmd"": [""yum"", ""list"", ""installed"", ""lsof""], ""delta"": ""0:00:00.260975"", ""end"": ""2016-10-26 16:07:16.555671"", ""failed"": true, ""rc"": 1, ""start"": ""2016-10-26 16:07:16.294696"", ""stderr"": ""Error: No matching Packages to list"", ""stdout"": ""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos"", ""stdout_lines"": [""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos""], ""warnings"": [""Consider using yum module rather than running yum""]} ...ignoring TASK [test : this should run handlers] ***************************************** changed: [host] TASK [test : Double check] ***************************************************** changed: [host] [WARNING]: Consider using yum module rather than running yum RUNNING HANDLER [test : handler1] ********************************************** ok: [host] => { ""msg"": ""handler 1 run"" } RUNNING HANDLER [test : handler2] ********************************************** ok: [host] => { ""msg"": ""handler 2 run"" } PLAY RECAP ********************************************************************* host : ok=7 changed=3 unreachable=0 failed=0 ``` Handlers are ran ##### ACTUAL RESULTS Observed in 2.2 and 2.3: ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [test : ensure lsof is not installed] ************************************* changed: [host] TASK [test : Double check] ***************************************************** fatal: [host]: FAILED! 
=> {""changed"": true, ""cmd"": [""yum"", ""list"", ""installed"", ""lsof""], ""delta"": ""0:00:00.270957"", ""end"": ""2016-10-26 16:05:45.987169"", ""failed"": true, ""rc"": 1, ""start"": ""2016-10-26 16:05:45.716212"", ""stderr"": ""Error: No matching Packages to list"", ""stdout"": ""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos"", ""stdout_lines"": [""Loaded plugins: amazon-id, rhui-lb, search-disabled-repos""], ""warnings"": [""Consider using yum module rather than running yum""]} ...ignoring TASK [test : this should run handlers] ***************************************** changed: [host] TASK [test : Double check] ***************************************************** changed: [host] [WARNING]: Consider using yum module rather than running yum PLAY RECAP ********************************************************************* host : ok=5 changed=3 unreachable=0 failed=0 ``` Handlers are not run",1,yum ignores notify handlers issue type bug report component name yum ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific rhel summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used build a project directory similar to the one below ├── group vars │   └── rhel │   └── vars ├── hosts ├── main yml └── roles └── test ├── handlers │   └── main yml └── tasks └── main yml run playbook ansible playbook i hosts main yml main yml hosts all become yes roles test roles test handlers main yml name debug msg handler run name debug msg handler run roles test tasks main yml block name ensure lsof is not installed yum name lsof state absent name double check command yum list installed lsof ignore errors true name this should run handlers yum name lsof state present notify name double check shell yum list installed lsof tags t test expected results as seen in play task ok task changed task fatal failed changed true cmd delta end failed true rc start stderr error no matching packages to list stdout loaded plugins amazon id rhui lb search disabled repos stdout lines warnings ignoring task changed task changed consider using yum module rather than running yum running handler ok msg handler run running handler ok msg handler run play recap host ok changed unreachable failed handlers are ran actual results observed in and play task ok task changed task fatal failed changed true cmd delta end failed true rc start stderr error no matching packages to list stdout loaded plugins amazon id rhui lb search disabled repos stdout lines warnings ignoring task changed task changed consider using yum module rather than running yum play recap host ok changed unreachable failed handlers are not run,1 1121,4989605802.0,IssuesEvent,2016-12-08 
12:28:56,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container does not allow to disable container logging via log_driver,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 28feba2fb3) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/14 12:25:32 (GMT +200) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY It is not possible to disable container logging, although 'none' is a valid option for docker run. > none Disables any logging for the container. docker logs won’t be available with this driver. > Source: https://docs.docker.com/engine/admin/logging/overview/ ##### STEPS TO REPRODUCE ``` docker_container: [...] log_driver: none ``` ##### EXPECTED RESULTS Container with disabled logging started. ##### ACTUAL RESULTS ``` FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""value of log_driver must be one of: json-file,syslog,journald,gelf,fluentd,awslogs,splunk, got: none""} ``` ",True,"docker_container does not allow to disable container logging via log_driver - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 28feba2fb3) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/core: (detached HEAD ae6992bf8c) last updated 2016/09/14 12:25:32 (GMT +200) lib/ansible/modules/extras: (detached HEAD afd0b23836) last updated 2016/09/14 12:25:32 (GMT +200) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY It is not possible to disable container logging, although 'none' is a valid option for docker run. > none Disables any logging for the container. docker logs won’t be available with this driver. > Source: https://docs.docker.com/engine/admin/logging/overview/ ##### STEPS TO REPRODUCE ``` docker_container: [...] log_driver: none ``` ##### EXPECTED RESULTS Container with disabled logging started. ##### ACTUAL RESULTS ``` FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""value of log_driver must be one of: json-file,syslog,journald,gelf,fluentd,awslogs,splunk, got: none""} ``` ",1,docker container does not allow to disable container logging via log driver issue type bug report component name docker container ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration n a os environment n a summary it is not possible to disable container logging although none is a valid option for docker run none disables any logging for the container docker logs won’t be available with this driver source steps to reproduce docker container log driver none expected results container with disabled logging started actual results failed changed false failed true msg value of log driver must be one of json file syslog journald gelf fluentd awslogs splunk got none ,1 1772,6575051225.0,IssuesEvent,2017-09-11 14:53:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,gem - a change in user_install doesn't get picked up as a change,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gem: ##### ANSIBLE VERSION ``` 10:32 $ ansible --version ansible 2.1.2.0 config file = /Users/chrisdorer/projects/ansible_onert/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Mac OS X --> Ubuntu 14.04 LTS ##### SUMMARY You'll only get one change, you should get two changes, take a peek at gem show blah -d. ##### STEPS TO REPRODUCE ``` gem: name=blah ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back **The program 'bundle' is currently not installed. To run 'bundle' please ask your administrator to install the package 'bundler'**) gem: name=blah user_install=no ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back **The program 'bundle' is currently not installed. To run 'bundle' please ask your administrator to install the package 'bundler'**) gem: name=blah user_install=no ``` ##### EXPECTED RESULTS ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back the ruby bundler man page) ##### ACTUAL RESULTS see above **STEPS TO REPRODUCE** ``` ``` ",True,"gem - a change in user_install doesn't get picked up as a change - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gem: ##### ANSIBLE VERSION ``` 10:32 $ ansible --version ansible 2.1.2.0 config file = /Users/chrisdorer/projects/ansible_onert/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Mac OS X --> Ubuntu 14.04 LTS ##### SUMMARY You'll only get one change, you should get two changes, take a peek at gem show blah -d. ##### STEPS TO REPRODUCE ``` gem: name=blah ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back **The program 'bundle' is currently not installed. To run 'bundle' please ask your administrator to install the package 'bundler'**) gem: name=blah user_install=no ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back **The program 'bundle' is currently not installed. 
To run 'bundle' please ask your administrator to install the package 'bundler'**) gem: name=blah user_install=no ``` ##### EXPECTED RESULTS ($> vagrant provision) ($> vagrant ssh) ($> bundler --help and gives back the ruby bundler man page) ##### ACTUAL RESULTS see above **STEPS TO REPRODUCE** ``` ``` ",1,gem a change in user install doesn t get picked up as a change issue type bug report component name gem ansible version ansible version ansible config file users chrisdorer projects ansible onert ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific mac os x ubuntu lts summary you ll only get one change you should get two changes take a peek at gem show blah d steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used gem name blah vagrant provision vagrant ssh bundler help and gives back the program bundle is currently not installed to run bundle please ask your administrator to install the package bundler gem name blah user install no vagrant provision vagrant ssh bundler help and gives back the program bundle is currently not installed to run bundle please ask your administrator to install the package bundler gem name blah user install no expected results vagrant provision vagrant ssh bundler help and gives back the ruby bundler man page actual results see above steps to reproduce ,1 806,4425654897.0,IssuesEvent,2016-08-16 16:01:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt_key always fails to import a subkey,bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_key ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT host OS: Ubuntu 14.04 controlled nodes' OS: Ubuntu 14.04 ##### SUMMARY Importing a (sign only) subkey with apt_key always fails, however the actual keyring gets created and contains the correct (sub)key. ##### STEPS TO REPRODUCE ``` $ cat >> aptrepo.yml < id=A254F5F0 keyserver=keyserver.ubuntu.com EOF $ ansible-playbook aptrepo.yml ``` ##### EXPECTED RESULTS The specified (sub)key gets successfuly imported, ansible returns exit code 0 (success) ##### ACTUAL RESULTS ``` fatal: [saceph-osd2.maas]: FAILED! => {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} fatal: [saceph-osd1.maas]: FAILED! => {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} fatal: [saceph-osd3.maas]: FAILED! 
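For the gem user_install report above, flipping user_install on an already-installed gem is not registered as a change on the affected releases. The sketch below makes the intent explicit by removing the per-user copy before installing system-wide; the gem name is illustrative (the report uses a placeholder), and whether the removal actually clears the per-user copy depends on the local gem setup, so treat this as a sketch rather than a fix.
```
# Sketch based on the gem report above: force a system-wide install
# instead of relying on a user_install flip being detected as a change.
- name: remove the per-user copy first
  gem:
    name: bundler        # 'blah' in the report; bundler used here for illustration
    state: absent

- name: install system-wide
  gem:
    name: bundler
    user_install: no
    state: present
```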
=> {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} However the key has been successfully imported: $ ansible all -m shell -a 'apt-key --keyring /etc/apt/trusted.gpg.d/ceph.gpg adv --list-public-keys | grep -e A254F5F0' saceph-osd2.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd1.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd3.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] ``` ",True,"apt_key always fails to import a subkey - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_key ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT host OS: Ubuntu 14.04 controlled nodes' OS: Ubuntu 14.04 ##### SUMMARY Importing a (sign only) subkey with apt_key always fails, however the actual keyring gets created and contains the correct (sub)key. ##### STEPS TO REPRODUCE ``` $ cat >> aptrepo.yml < id=A254F5F0 keyserver=keyserver.ubuntu.com EOF $ ansible-playbook aptrepo.yml ``` ##### EXPECTED RESULTS The specified (sub)key gets successfuly imported, ansible returns exit code 0 (success) ##### ACTUAL RESULTS ``` fatal: [saceph-osd2.maas]: FAILED! => {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} fatal: [saceph-osd1.maas]: FAILED! => {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} fatal: [saceph-osd3.maas]: FAILED! => {""changed"": false, ""failed"": true, ""id"": ""A254F5F0"", ""msg"": ""key does not seem to have been added""} However the key has been successfully imported: $ ansible all -m shell -a 'apt-key --keyring /etc/apt/trusted.gpg.d/ceph.gpg adv --list-public-keys | grep -e A254F5F0' saceph-osd2.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd1.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd3.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] ``` ",1,apt key always fails to import a subkey issue type bug report component name apt key ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific host os ubuntu controlled nodes os ubuntu summary importing a sign only subkey with apt key always fails however the actual keyring gets created and contains the correct sub key steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used cat aptrepo yml eof hosts all become true tasks name import custom repo keyring apt key id keyserver keyserver ubuntu com eof ansible playbook aptrepo yml expected results the specified sub key gets successfuly imported ansible returns exit code success actual results fatal failed changed false failed true id msg key does not seem to have been added fatal failed changed false failed true id msg key does not seem to have been added fatal failed changed false failed true id msg key does not seem to have been added however the key has been successfully imported ansible all m 
shell a apt key keyring etc apt trusted gpg d ceph gpg adv list public keys grep e saceph maas success rc sub saceph maas success rc sub saceph maas success rc sub ,1 1740,6574888853.0,IssuesEvent,2017-09-11 14:24:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,systemd module docs poorly formatted,affects_2.3 docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME system/systemd.py ##### ANSIBLE VERSION ``` public docs as of 2016-10-24 ``` ##### SUMMARY status in 'return values' is illegible in the docs at: https://docs.ansible.com/ansible/systemd_module.html ##### STEPS TO REPRODUCE Go to page ##### EXPECTED RESULTS Not what I see. ##### ACTUAL RESULTS ![screenshot from 2016-10-24 10-55-46](https://cloud.githubusercontent.com/assets/5208768/19650781/7e0726e0-99d8-11e6-9d34-3074d63e2d89.png) ",True,"systemd module docs poorly formatted - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME system/systemd.py ##### ANSIBLE VERSION ``` public docs as of 2016-10-24 ``` ##### SUMMARY status in 'return values' is illegible in the docs at: https://docs.ansible.com/ansible/systemd_module.html ##### STEPS TO REPRODUCE Go to page ##### EXPECTED RESULTS Not what I see. ##### ACTUAL RESULTS ![screenshot from 2016-10-24 10-55-46](https://cloud.githubusercontent.com/assets/5208768/19650781/7e0726e0-99d8-11e6-9d34-3074d63e2d89.png) ",1,systemd module docs poorly formatted issue type documentation report component name system systemd py ansible version public docs as of summary status in return values is illegible in the docs at steps to reproduce go to page expected results not what i see actual results ,1 1883,6577516314.0,IssuesEvent,2017-09-12 01:27:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Fact gathering randomly fails on a VM,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME setup module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory=hosts host_key_checking=false nocows=1 fact_caching=jsonfile fact_caching_connection=facts/ ansible_managed = Ansible managed: {file} modified on %Y-%m-%d by {uid} on {host} scp_if_ssh=true timeout=20 pipelining=true [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s ``` ##### OS / ENVIRONMENT HOST: Ubuntu 15.10 Linux laptappy 4.2.0-34-generic #39-Ubuntu SMP Thu Mar 10 22:13:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux VM: Centos 7 Linux gl-docker 4.4.3-1.el7.elrepo.x86_64 #1 SMP Thu Feb 25 17:09:04 EST 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Trying to gather facts from a newly spun up vagrant box fails. I am not using the ansible vagrant provisioner, I am running ansible after the fact. I can connect all night and day with the connection string that ansible verbose output uses. After a successful fact gathering, I can run ansible commands all night and day against my node(s). Fact gathering is the only thing that chokes randomly. ##### STEPS TO REPRODUCE The command I run ``` cd ansible ; ansible vagrant -m setup -a ""fact_path=facts"" -vvvv ; cd .. ``` Inventory. Regardless if I use the private key or password, the results are the same. 
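One possible angle on the apt_key report above: the key material lands on disk, but the module's post-import check cannot match a sign-only subkey id. A hedged workaround sketch is to request the primary key id instead, keeping the dedicated keyring from the report. PRIMARY_KEY_ID below is a placeholder — the report only gives the subkey id A254F5F0.
```
# Sketch of a possible workaround for the apt_key subkey report above.
- name: import repository signing key into a dedicated keyring
  apt_key:
    id: "PRIMARY_KEY_ID"            # placeholder; use the primary key id, not the subkey
    keyserver: keyserver.ubuntu.com
    keyring: /etc/apt/trusted.gpg.d/ceph.gpg
    state: present
```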
``` gl-docker ansible_connection=ssh ansible_host=127.0.0.1 ansible_port=2233 ansible_user=vagrant ansible_ssh_private_key_file=../.vagrant/machines/gl-docker/virtualbox/private_key [vagrant] gl-docker ``` I can run this command and everything appears to exit cleanly. I copy/pasted this string from the ansible verbose output. ``` for i in {1..200} ; do ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 'echo -e ""#########\nHEY IM IN\n#########"" ; exit' ; echo $?; echo $i ;do ``` I've also set the following variable to successfully and repeatedly run the setup module on the host itself. ``` export ANSIBLE_KEEP_REMOTE_FILES=1 ``` ##### EXPECTED RESULTS I expect fact gathering to return successfully each and every time. ##### ACTUAL RESULTS Fact gathering randomly fails. Here are two subsequent fact gathering runs. ``` Using /opt/work-repos/docker-microservice-devenv/ansible/ansible.cfg as config file <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776 `"" )'""'""'' <127.0.0.1> PUT /tmp/tmpwzYwQD TO /home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/setup <127.0.0.1> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/"" > /dev/null 2>&1'""'""'' gl-docker | FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""setup"" }, ""module_stderr"": """", ""module_stdout"": ""{\""invocation\"": {\""module_args\"": {\""filter\"": \""*\"", \""fact_path\"": \""facts\""}}, \""changed\"": false, \""_ansible_verbose_override\"": true, \""ansible_facts\"": {\""ansible_product_serial\"": \""NA\"", \""ansible_form_factor\"": \""Other\"", \""ansible_product_version\"": \""1.2\"", \""ansible_fips\"": false, \""ansible_service_mgr\"": \""systemd\"", \""ansible_user_id\"": \""vagrant\"", \""module_setup\"": true, \""ansible_memtotal_mb\"": 2000, \""ansible_ssh_host_key_rsa_public\"": \""AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ8cfLKMpYB/mrhS3DuBTB6cTryHUzLHJ1gIk9Ro+QhbidvnAb9Br4182qytb2fmSB0kn+I4WbZBSDj2Kv9uicL8BVEyoMZtAsDZ0cHBsN5Su+Dk5mijG4PplAM8CjU6BxvhGZJ+lxuT4IMMCEMua7SmVVITaiebVbcTi7x61WsuOaKAAe+D9xaHcY0tlKvr0XaZxnyxtscbHLEc1vi6WUYlM1uLXdnMycE5DYlmsl5FfCkrtFeEG/xdbbcmBlrSl7WUwyVY0w1GPGBQFatEHf8PJhYTELNWVHR1rCMMP8Q/uiT9E/UkQHASS3XgRe+W6FqgD6pO6mOHkLXjoFVtsD\"", \""ansible_ssh_host_key_ecdsa_public\"": \""AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDB17PdGWfCxEJRNI1dj88lwt47Ah4eneGboTvG8V2Rctf7nGAOdLrB+//YrHPyb7mSiJp58/p+w8qkKy4sFQ38=\"", \""ansible_distribution_version\"": \""7.2.1511\"", \""ansible_domain\"": \""localdomain\"", \""ansible_user_shell\"": \""/bin/bash\"", \""ansible_date_time\"": {\""weekday_number\"": \""5\"", \""iso8601_basic_short\"": \""20160325T132842\"", \""tz\"": \""UTC\"", \""weeknumber\"": \""12\"", \""hour\"": \""13\"", \""year\"": \""2016\"", \""minute\"": \""28\"", \""tz_offset\"": \""+0000\"", \""month\"": \""03\"", \""epoch\"": \""1458912522\"", \""iso8601_micro\"": \""2016-03-25T13:28:42.761267Z\"", \""weekday\"": \""Friday\"", \""time\"": \""13:28:42\"", \""date\"": \""2016-03-25\"", \""iso8601\"": \""2016-03-25T13:28:42Z\"", \""day\"": \""25\"", \""iso8601_basic\"": \""20160325T132842761179\"", \""second\"": \""42\""}, \""ansible_ssh_host_key_ed25519_public\"": \""AAAAC3NzaC1lZDI1NTE5AAAAIHtYETIJC0EFGF1Mam7pYjiVxAVTNQp6yqADQoSS13bP\"", \""ansible_processor_cores\"": 1, \""ansible_virtualization_role\"": \""guest\"", \""ansible_env\"": {\""LESSOPEN\"": \""||/usr/bin/lesspipe.sh %s\"", \""SSH_CLIENT\"": \""10.0.2.2 57776 22\"", \""SELINUX_USE_CURRENT_RANGE\"": \""\"", \""LOGNAME\"": \""vagrant\"", \""USER\"": \""vagrant\"", \""PATH\"": \""/usr/local/bin:/usr/bin\"", \""HOME\"": \""/home/vagrant\"", \""LANG\"": \""en_US.UTF-8\"", \""TERM\"": \""xterm-256color\"", \""SHELL\"": \""/bin/bash\"", \""SHLVL\"": \""2\"", \""LC_ALL\"": \""en_US.UTF-8\"", \""XDG_RUNTIME_DIR\"": \""/run/user/1000\"", \""SELINUX_ROLE_REQUESTED\"": \""\"", \""QT_GRAPHICSSYSTEM_CHECKED\"": \""1\"", \""XDG_SESSION_ID\"": \""11\"", \""_\"": \""/usr/bin/python\"", \""LC_MESSAGES\"": \""en_US.UTF-8\"", \""SSH_TTY\"": \""/dev/pts/0\"", \""SELINUX_LEVEL_REQUESTED\"": \""\"", \""PWD\"": \""/home/vagrant\"", \""MAIL\"": \""/var/mail/vagrant\"", \""SSH_CONNECTION\"": \""10.0.2.2 57776 10.0.2.15 22\""}, \""ansible_processor_vcpus\"": 1, \""ansible_docker0\"": {\""macaddress\"": \""02:42:d0:a0:e3:30\"", \""interfaces\"": [], \""mtu\"": 1500, \""device\"": \""docker0\"", \""promisc\"": false, \""stp\"": false, \""ipv4\"": {\""broadcast\"": \""global\"", \""netmask\"": \""255.255.0.0\"", \""network\"": \""172.17.0.0\"", \""address\"": \""172.17.0.1\""}, \""active\"": false, \""type\"": \""bridge\"", \""id\"": \""8000.0242d0a0e330\""}, \""ansible_bios_version\"": \""VirtualBox\"", \""ansible_processor\"": [\""GenuineIntel\"", \""Intel(R) Core(TM) 
i7-4578U CPU @ 3.00GHz\""], \""ansible_virtualization_type\"": \""virtualbox\"", \""ansible_lo\"": {\""mtu\"": 65536, \""device\"": \""lo\"", \""promisc\"": false, \""ipv4\"": {\""broadcast\"": \""host\"", \""netmask\"": \""255.0.0.0\"", \""network\"": \""127.0.0.0\"", \""address\"": \""127.0.0.1\""}, \""ipv6\"": [{\""scope\"": \""host\"", \""prefix\"": \""128\"", \""address\"": \""::1\""}], \""active\"": true, \""type\"": \""loopback\""}, \""ansible_userspace_bits\"": \""64\"", \""ansible_architecture\"": \""x86_64\"", \""ansible_default_ipv4\"": {\""macaddress\"": \""08:00:27:07:5e:92\"", \""network\"": \""10.0.2.0\"", \""mtu\"": 1500, \""broadcast\"": \""10.0.2.255\"", \""alias\"": \""enp0s3\"", \""netmask\"": \""255.255.255.0\"", \""address\"": \""10.0.2.15\"", \""interface\"": \""enp0s3\"", \""type\"": \""ether\"", \""gateway\"": \""10.0.2.2\""}, \""ansible_swapfree_mb\"": 1015, \""ansible_default_ipv6\"": {}, \""ansible_distribution_release\"": \""Core\"", \""ansible_system_vendor\"": \""innotek GmbH\"", \""ansible_os_family\"": \""RedHat\"", \""ansible_cmdline\"": {\""BOOT_IMAGE\"": \""/vmlinuz-4.4.3-1.el7.elrepo.x86_64\"", \""quiet\"": true, \""rhgb\"": true, \""rd.lvm.lv\"": \""centos/swap\"", \""crashkernel\"": \""auto\"", \""ro\"": true, \""root\"": \""/dev/mapper/centos-root\""}, \""ansible_mounts\"": [{\""uuid\"": \""319365c2-ad4a-4a6b-986a-0b050e96624c\"", \""size_total\"": 8986296320, \""mount\"": \""/\"", \""size_available\"": 6826946560, \""fstype\"": \""xfs\"", \""device\"": \""/dev/mapper/centos-root\"", \""options\"": \""rw,seclabel,relatime,attr2,inode64,noquota\"""", ""msg"": ""MODULE FAILURE"", ""parsed"": false } ########################################################### ########################################################### Using /opt/work-repos/docker-microservice-devenv/ansible/ansible.cfg as config file <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005 `"" )'""'""'' <127.0.0.1> PUT /tmp/tmpcA5aGq TO /home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/setup <127.0.0.1> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o 
ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/"" > /dev/null 2>&1'""'""'' gl-docker | SUCCESS => { ""ansible_facts"": { ""ansible_all_ipv4_addresses"": [ ""172.17.0.1"", ""10.0.2.15"", ""10.100.101.111"" ], ""ansible_all_ipv6_addresses"": [ ""fe80::a00:27ff:fe07:5e92"", ""fe80::a00:27ff:fe63:1c7c"" ], ""ansible_architecture"": ""x86_64"", ""ansible_bios_date"": ""12/01/2006"", ""ansible_bios_version"": ""VirtualBox"", ""ansible_cmdline"": { ""BOOT_IMAGE"": ""/vmlinuz-4.4.3-1.el7.elrepo.x86_64"", ""crashkernel"": ""auto"", ""quiet"": true, ""rd.lvm.lv"": ""centos/swap"", ""rhgb"": true, ""ro"": true, ""root"": ""/dev/mapper/centos-root"" }, ""ansible_date_time"": { ""date"": ""2016-03-25"", ""day"": ""25"", ""epoch"": ""1458912524"", ""hour"": ""13"", ""iso8601"": ""2016-03-25T13:28:44Z"", ""iso8601_basic"": ""20160325T132844056072"", ""iso8601_basic_short"": ""20160325T132844"", ""iso8601_micro"": ""2016-03-25T13:28:44.056144Z"", ""minute"": ""28"", ""month"": ""03"", ""second"": ""44"", ""time"": ""13:28:44"", ""tz"": ""UTC"", ""tz_offset"": ""+0000"", ""weekday"": ""Friday"", ""weekday_number"": ""5"", ""weeknumber"": ""12"", ""year"": ""2016"" }, ""ansible_default_ipv4"": { ""address"": ""10.0.2.15"", ""alias"": ""enp0s3"", ""broadcast"": ""10.0.2.255"", ""gateway"": ""10.0.2.2"", ""interface"": ""enp0s3"", ""macaddress"": ""08:00:27:07:5e:92"", ""mtu"": 1500, ""netmask"": ""255.255.255.0"", ""network"": ""10.0.2.0"", ""type"": ""ether"" }, ""ansible_default_ipv6"": {}, ""ansible_devices"": { ""sda"": { ""holders"": [], ""host"": ""IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)"", ""model"": ""VBOX HARDDISK"", ""partitions"": { ""sda1"": { ""sectors"": ""1024000"", ""sectorsize"": 512, ""size"": ""500.00 MB"", ""start"": ""2048"" }, ""sda2"": { ""sectors"": ""19740672"", ""sectorsize"": 512, ""size"": ""9.41 GB"", ""start"": ""1026048"" } }, ""removable"": ""0"", ""rotational"": ""1"", ""scheduler_mode"": ""deadline"", ""sectors"": ""20766720"", ""sectorsize"": ""512"", ""size"": ""9.90 GB"", ""support_discard"": ""0"", ""vendor"": ""ATA"" } }, ""ansible_distribution"": ""CentOS"", ""ansible_distribution_major_version"": ""7"", ""ansible_distribution_release"": ""Core"", ""ansible_distribution_version"": ""7.2.1511"", ""ansible_dns"": { ""nameservers"": [ ""10.0.2.3"" ] }, ""ansible_docker0"": { ""active"": false, ""device"": ""docker0"", ""id"": ""8000.0242d0a0e330"", ""interfaces"": [], ""ipv4"": { ""address"": ""172.17.0.1"", ""broadcast"": ""global"", ""netmask"": ""255.255.0.0"", ""network"": ""172.17.0.0"" }, ""macaddress"": ""02:42:d0:a0:e3:30"", ""mtu"": 1500, ""promisc"": false, ""stp"": false, ""type"": ""bridge"" }, ""ansible_domain"": ""localdomain"", ""ansible_enp0s3"": { ""active"": true, ""device"": ""enp0s3"", ""ipv4"": { ""address"": ""10.0.2.15"", ""broadcast"": ""10.0.2.255"", ""netmask"": ""255.255.255.0"", ""network"": ""10.0.2.0"" }, ""ipv6"": [ { ""address"": ""fe80::a00:27ff:fe07:5e92"", ""prefix"": ""64"", ""scope"": ""link"" } ], ""macaddress"": ""08:00:27:07:5e:92"", ""module"": ""e1000"", ""mtu"": 1500, ""pciid"": ""0000:00:03.0"", ""promisc"": false, ""type"": ""ether"" }, ""ansible_enp0s8"": { ""active"": true, ""device"": ""enp0s8"", ""ipv4"": { 
""address"": ""10.100.101.111"", ""broadcast"": ""10.100.101.255"", ""netmask"": ""255.255.255.0"", ""network"": ""10.100.101.0"" }, ""ipv6"": [ { ""address"": ""fe80::a00:27ff:fe63:1c7c"", ""prefix"": ""64"", ""scope"": ""link"" } ], ""macaddress"": ""08:00:27:63:1c:7c"", ""module"": ""e1000"", ""mtu"": 1500, ""pciid"": ""0000:00:08.0"", ""promisc"": false, ""type"": ""ether"" }, ""ansible_env"": { ""HOME"": ""/home/vagrant"", ""LANG"": ""en_US.UTF-8"", ""LC_ALL"": ""en_US.UTF-8"", ""LC_MESSAGES"": ""en_US.UTF-8"", ""LESSOPEN"": ""||/usr/bin/lesspipe.sh %s"", ""LOGNAME"": ""vagrant"", ""MAIL"": ""/var/mail/vagrant"", ""PATH"": ""/usr/local/bin:/usr/bin"", ""PWD"": ""/home/vagrant"", ""QT_GRAPHICSSYSTEM_CHECKED"": ""1"", ""SELINUX_LEVEL_REQUESTED"": """", ""SELINUX_ROLE_REQUESTED"": """", ""SELINUX_USE_CURRENT_RANGE"": """", ""SHELL"": ""/bin/bash"", ""SHLVL"": ""2"", ""SSH_CLIENT"": ""10.0.2.2 57776 22"", ""SSH_CONNECTION"": ""10.0.2.2 57776 10.0.2.15 22"", ""SSH_TTY"": ""/dev/pts/0"", ""TERM"": ""xterm-256color"", ""USER"": ""vagrant"", ""XDG_RUNTIME_DIR"": ""/run/user/1000"", ""XDG_SESSION_ID"": ""11"", ""_"": ""/usr/bin/python"" }, ""ansible_fips"": false, ""ansible_form_factor"": ""Other"", ""ansible_fqdn"": ""localhost.localdomain"", ""ansible_hostname"": ""gl-docker"", ""ansible_interfaces"": [ ""lo"", ""docker0"", ""enp0s3"", ""enp0s8"" ], ""ansible_kernel"": ""4.4.3-1.el7.elrepo.x86_64"", ""ansible_lo"": { ""active"": true, ""device"": ""lo"", ""ipv4"": { ""address"": ""127.0.0.1"", ""broadcast"": ""host"", ""netmask"": ""255.0.0.0"", ""network"": ""127.0.0.0"" }, ""ipv6"": [ { ""address"": ""::1"", ""prefix"": ""128"", ""scope"": ""host"" } ], ""mtu"": 65536, ""promisc"": false, ""type"": ""loopback"" }, ""ansible_lsb"": { ""codename"": ""Core"", ""description"": ""CentOS Linux release 7.2.1511 (Core)"", ""id"": ""CentOS"", ""major_release"": ""7"", ""release"": ""7.2.1511"" }, ""ansible_machine"": ""x86_64"", ""ansible_machine_id"": ""69c2e9ff6b3b4594b1a26db25287da79"", ""ansible_memfree_mb"": 1629, ""ansible_memory_mb"": { ""nocache"": { ""free"": 1790, ""used"": 210 }, ""real"": { ""free"": 1629, ""total"": 2000, ""used"": 371 }, ""swap"": { ""cached"": 0, ""free"": 1015, ""total"": 1015, ""used"": 0 } }, ""ansible_memtotal_mb"": 2000, ""ansible_mounts"": [ { ""device"": ""/dev/mapper/centos-root"", ""fstype"": ""xfs"", ""mount"": ""/"", ""options"": ""rw,seclabel,relatime,attr2,inode64,noquota"", ""size_available"": 6826946560, ""size_total"": 8986296320, ""uuid"": ""319365c2-ad4a-4a6b-986a-0b050e96624c"" }, { ""device"": ""/dev/sda1"", ""fstype"": ""xfs"", ""mount"": ""/boot"", ""options"": ""rw,seclabel,relatime,attr2,inode64,noquota"", ""size_available"": 320561152, ""size_total"": 520794112, ""uuid"": ""9a2e2b29-a60f-484b-be03-30df74cc87a4"" } ], ""ansible_nodename"": ""gl-docker"", ""ansible_os_family"": ""RedHat"", ""ansible_pkg_mgr"": ""yum"", ""ansible_processor"": [ ""GenuineIntel"", ""Intel(R) Core(TM) i7-4578U CPU @ 3.00GHz"" ], ""ansible_processor_cores"": 1, ""ansible_processor_count"": 1, ""ansible_processor_threads_per_core"": 1, ""ansible_processor_vcpus"": 1, ""ansible_product_name"": ""VirtualBox"", ""ansible_product_serial"": ""NA"", ""ansible_product_uuid"": ""NA"", ""ansible_product_version"": ""1.2"", ""ansible_python_version"": ""2.7.5"", ""ansible_selinux"": { ""config_mode"": ""permissive"", ""mode"": ""permissive"", ""policyvers"": 30, ""status"": ""enabled"", ""type"": ""targeted"" }, ""ansible_service_mgr"": ""systemd"", 
""ansible_ssh_host_key_ecdsa_public"": ""AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDB17PdGWfCxEJRNI1dj88lwt47Ah4eneGboTvG8V2Rctf7nGAOdLrB+//YrHPyb7mSiJp58/p+w8qkKy4sFQ38="", ""ansible_ssh_host_key_ed25519_public"": ""AAAAC3NzaC1lZDI1NTE5AAAAIHtYETIJC0EFGF1Mam7pYjiVxAVTNQp6yqADQoSS13bP"", ""ansible_ssh_host_key_rsa_public"": ""AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ8cfLKMpYB/mrhS3DuBTB6cTryHUzLHJ1gIk9Ro+QhbidvnAb9Br4182qytb2fmSB0kn+I4WbZBSDj2Kv9uicL8BVEyoMZtAsDZ0cHBsN5Su+Dk5mijG4PplAM8CjU6BxvhGZJ+lxuT4IMMCEMua7SmVVITaiebVbcTi7x61WsuOaKAAe+D9xaHcY0tlKvr0XaZxnyxtscbHLEc1vi6WUYlM1uLXdnMycE5DYlmsl5FfCkrtFeEG/xdbbcmBlrSl7WUwyVY0w1GPGBQFatEHf8PJhYTELNWVHR1rCMMP8Q/uiT9E/UkQHASS3XgRe+W6FqgD6pO6mOHkLXjoFVtsD"", ""ansible_swapfree_mb"": 1015, ""ansible_swaptotal_mb"": 1015, ""ansible_system"": ""Linux"", ""ansible_system_vendor"": ""innotek GmbH"", ""ansible_uptime_seconds"": 2173, ""ansible_user_dir"": ""/home/vagrant"", ""ansible_user_gecos"": ""vagrant"", ""ansible_user_gid"": 1000, ""ansible_user_id"": ""vagrant"", ""ansible_user_shell"": ""/bin/bash"", ""ansible_user_uid"": 1000, ""ansible_userspace_architecture"": ""x86_64"", ""ansible_userspace_bits"": ""64"", ""ansible_virtualization_role"": ""guest"", ""ansible_virtualization_type"": ""virtualbox"", ""module_setup"": true }, ""changed"": false, ""invocation"": { ""module_args"": { ""fact_path"": ""facts"", ""filter"": ""*"" }, ""module_name"": ""setup"" } } ``` ",True,"Fact gathering randomly fails on a VM - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME setup module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory=hosts host_key_checking=false nocows=1 fact_caching=jsonfile fact_caching_connection=facts/ ansible_managed = Ansible managed: {file} modified on %Y-%m-%d by {uid} on {host} scp_if_ssh=true timeout=20 pipelining=true [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s ``` ##### OS / ENVIRONMENT HOST: Ubuntu 15.10 Linux laptappy 4.2.0-34-generic #39-Ubuntu SMP Thu Mar 10 22:13:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux VM: Centos 7 Linux gl-docker 4.4.3-1.el7.elrepo.x86_64 #1 SMP Thu Feb 25 17:09:04 EST 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Trying to gather facts from a newly spun up vagrant box fails. I am not using the ansible vagrant provisioner, I am running ansible after the fact. I can connect all night and day with the connection string that ansible verbose output uses. After a successful fact gathering, I can run ansible commands all night and day against my node(s). Fact gathering is the only thing that chokes randomly. ##### STEPS TO REPRODUCE The command I run ``` cd ansible ; ansible vagrant -m setup -a ""fact_path=facts"" -vvvv ; cd .. ``` Inventory. Regardless if I use the private key or password, the results are the same. ``` gl-docker ansible_connection=ssh ansible_host=127.0.0.1 ansible_port=2233 ansible_user=vagrant ansible_ssh_private_key_file=../.vagrant/machines/gl-docker/virtualbox/private_key [vagrant] gl-docker ``` I can run this command and everything appears to exit cleanly. I copy/pasted this string from the ansible verbose output. 
``` for i in {1..200} ; do ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 'echo -e ""#########\nHEY IM IN\n#########"" ; exit' ; echo $?; echo $i ;do ``` I've also set the following variable to successfully and repeatedly run the setup module on the host itself. ``` export ANSIBLE_KEEP_REMOTE_FILES=1 ``` ##### EXPECTED RESULTS I expect fact gathering to return successfully each and every time. ##### ACTUAL RESULTS Fact gathering randomly fails. Here are two subsequent fact gathering runs. ``` Using /opt/work-repos/docker-microservice-devenv/ansible/ansible.cfg as config file <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776 `"" )'""'""'' <127.0.0.1> PUT /tmp/tmpwzYwQD TO /home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/setup <127.0.0.1> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1458912522.44-162920354534776/"" > /dev/null 2>&1'""'""'' gl-docker | FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""setup"" }, ""module_stderr"": """", ""module_stdout"": ""{\""invocation\"": {\""module_args\"": {\""filter\"": \""*\"", \""fact_path\"": \""facts\""}}, \""changed\"": false, \""_ansible_verbose_override\"": true, \""ansible_facts\"": {\""ansible_product_serial\"": \""NA\"", \""ansible_form_factor\"": \""Other\"", \""ansible_product_version\"": \""1.2\"", \""ansible_fips\"": false, \""ansible_service_mgr\"": \""systemd\"", \""ansible_user_id\"": \""vagrant\"", \""module_setup\"": true, \""ansible_memtotal_mb\"": 2000, \""ansible_ssh_host_key_rsa_public\"": \""AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ8cfLKMpYB/mrhS3DuBTB6cTryHUzLHJ1gIk9Ro+QhbidvnAb9Br4182qytb2fmSB0kn+I4WbZBSDj2Kv9uicL8BVEyoMZtAsDZ0cHBsN5Su+Dk5mijG4PplAM8CjU6BxvhGZJ+lxuT4IMMCEMua7SmVVITaiebVbcTi7x61WsuOaKAAe+D9xaHcY0tlKvr0XaZxnyxtscbHLEc1vi6WUYlM1uLXdnMycE5DYlmsl5FfCkrtFeEG/xdbbcmBlrSl7WUwyVY0w1GPGBQFatEHf8PJhYTELNWVHR1rCMMP8Q/uiT9E/UkQHASS3XgRe+W6FqgD6pO6mOHkLXjoFVtsD\"", \""ansible_ssh_host_key_ecdsa_public\"": \""AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDB17PdGWfCxEJRNI1dj88lwt47Ah4eneGboTvG8V2Rctf7nGAOdLrB+//YrHPyb7mSiJp58/p+w8qkKy4sFQ38=\"", \""ansible_distribution_version\"": \""7.2.1511\"", \""ansible_domain\"": \""localdomain\"", \""ansible_user_shell\"": \""/bin/bash\"", \""ansible_date_time\"": {\""weekday_number\"": \""5\"", \""iso8601_basic_short\"": \""20160325T132842\"", \""tz\"": \""UTC\"", \""weeknumber\"": \""12\"", \""hour\"": \""13\"", \""year\"": \""2016\"", \""minute\"": \""28\"", \""tz_offset\"": \""+0000\"", \""month\"": \""03\"", \""epoch\"": \""1458912522\"", \""iso8601_micro\"": \""2016-03-25T13:28:42.761267Z\"", \""weekday\"": \""Friday\"", \""time\"": \""13:28:42\"", \""date\"": \""2016-03-25\"", \""iso8601\"": \""2016-03-25T13:28:42Z\"", \""day\"": \""25\"", \""iso8601_basic\"": \""20160325T132842761179\"", \""second\"": \""42\""}, \""ansible_ssh_host_key_ed25519_public\"": \""AAAAC3NzaC1lZDI1NTE5AAAAIHtYETIJC0EFGF1Mam7pYjiVxAVTNQp6yqADQoSS13bP\"", \""ansible_processor_cores\"": 1, \""ansible_virtualization_role\"": \""guest\"", \""ansible_env\"": {\""LESSOPEN\"": \""||/usr/bin/lesspipe.sh %s\"", \""SSH_CLIENT\"": \""10.0.2.2 57776 22\"", \""SELINUX_USE_CURRENT_RANGE\"": \""\"", \""LOGNAME\"": \""vagrant\"", \""USER\"": \""vagrant\"", \""PATH\"": \""/usr/local/bin:/usr/bin\"", \""HOME\"": \""/home/vagrant\"", \""LANG\"": \""en_US.UTF-8\"", \""TERM\"": \""xterm-256color\"", \""SHELL\"": \""/bin/bash\"", \""SHLVL\"": \""2\"", \""LC_ALL\"": \""en_US.UTF-8\"", \""XDG_RUNTIME_DIR\"": \""/run/user/1000\"", \""SELINUX_ROLE_REQUESTED\"": \""\"", \""QT_GRAPHICSSYSTEM_CHECKED\"": \""1\"", \""XDG_SESSION_ID\"": \""11\"", \""_\"": \""/usr/bin/python\"", \""LC_MESSAGES\"": \""en_US.UTF-8\"", \""SSH_TTY\"": \""/dev/pts/0\"", \""SELINUX_LEVEL_REQUESTED\"": \""\"", \""PWD\"": \""/home/vagrant\"", \""MAIL\"": \""/var/mail/vagrant\"", \""SSH_CONNECTION\"": \""10.0.2.2 57776 10.0.2.15 22\""}, \""ansible_processor_vcpus\"": 1, \""ansible_docker0\"": {\""macaddress\"": \""02:42:d0:a0:e3:30\"", \""interfaces\"": [], \""mtu\"": 1500, \""device\"": \""docker0\"", \""promisc\"": false, \""stp\"": false, \""ipv4\"": {\""broadcast\"": \""global\"", \""netmask\"": \""255.255.0.0\"", \""network\"": \""172.17.0.0\"", \""address\"": \""172.17.0.1\""}, \""active\"": false, \""type\"": \""bridge\"", \""id\"": \""8000.0242d0a0e330\""}, \""ansible_bios_version\"": \""VirtualBox\"", \""ansible_processor\"": [\""GenuineIntel\"", \""Intel(R) Core(TM) 
i7-4578U CPU @ 3.00GHz\""], \""ansible_virtualization_type\"": \""virtualbox\"", \""ansible_lo\"": {\""mtu\"": 65536, \""device\"": \""lo\"", \""promisc\"": false, \""ipv4\"": {\""broadcast\"": \""host\"", \""netmask\"": \""255.0.0.0\"", \""network\"": \""127.0.0.0\"", \""address\"": \""127.0.0.1\""}, \""ipv6\"": [{\""scope\"": \""host\"", \""prefix\"": \""128\"", \""address\"": \""::1\""}], \""active\"": true, \""type\"": \""loopback\""}, \""ansible_userspace_bits\"": \""64\"", \""ansible_architecture\"": \""x86_64\"", \""ansible_default_ipv4\"": {\""macaddress\"": \""08:00:27:07:5e:92\"", \""network\"": \""10.0.2.0\"", \""mtu\"": 1500, \""broadcast\"": \""10.0.2.255\"", \""alias\"": \""enp0s3\"", \""netmask\"": \""255.255.255.0\"", \""address\"": \""10.0.2.15\"", \""interface\"": \""enp0s3\"", \""type\"": \""ether\"", \""gateway\"": \""10.0.2.2\""}, \""ansible_swapfree_mb\"": 1015, \""ansible_default_ipv6\"": {}, \""ansible_distribution_release\"": \""Core\"", \""ansible_system_vendor\"": \""innotek GmbH\"", \""ansible_os_family\"": \""RedHat\"", \""ansible_cmdline\"": {\""BOOT_IMAGE\"": \""/vmlinuz-4.4.3-1.el7.elrepo.x86_64\"", \""quiet\"": true, \""rhgb\"": true, \""rd.lvm.lv\"": \""centos/swap\"", \""crashkernel\"": \""auto\"", \""ro\"": true, \""root\"": \""/dev/mapper/centos-root\""}, \""ansible_mounts\"": [{\""uuid\"": \""319365c2-ad4a-4a6b-986a-0b050e96624c\"", \""size_total\"": 8986296320, \""mount\"": \""/\"", \""size_available\"": 6826946560, \""fstype\"": \""xfs\"", \""device\"": \""/dev/mapper/centos-root\"", \""options\"": \""rw,seclabel,relatime,attr2,inode64,noquota\"""", ""msg"": ""MODULE FAILURE"", ""parsed"": false } ########################################################### ########################################################### Using /opt/work-repos/docker-microservice-devenv/ansible/ansible.cfg as config file <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005 `"" )'""'""'' <127.0.0.1> PUT /tmp/tmpcA5aGq TO /home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/setup <127.0.0.1> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant <127.0.0.1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2233 -o 'IdentityFile=""../.vagrant/machines/gl-docker/virtualbox/private_key""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o 
ConnectTimeout=20 -o ControlPath=/home/phil/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/setup; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1458912523.74-172174991201005/"" > /dev/null 2>&1'""'""'' gl-docker | SUCCESS => { ""ansible_facts"": { ""ansible_all_ipv4_addresses"": [ ""172.17.0.1"", ""10.0.2.15"", ""10.100.101.111"" ], ""ansible_all_ipv6_addresses"": [ ""fe80::a00:27ff:fe07:5e92"", ""fe80::a00:27ff:fe63:1c7c"" ], ""ansible_architecture"": ""x86_64"", ""ansible_bios_date"": ""12/01/2006"", ""ansible_bios_version"": ""VirtualBox"", ""ansible_cmdline"": { ""BOOT_IMAGE"": ""/vmlinuz-4.4.3-1.el7.elrepo.x86_64"", ""crashkernel"": ""auto"", ""quiet"": true, ""rd.lvm.lv"": ""centos/swap"", ""rhgb"": true, ""ro"": true, ""root"": ""/dev/mapper/centos-root"" }, ""ansible_date_time"": { ""date"": ""2016-03-25"", ""day"": ""25"", ""epoch"": ""1458912524"", ""hour"": ""13"", ""iso8601"": ""2016-03-25T13:28:44Z"", ""iso8601_basic"": ""20160325T132844056072"", ""iso8601_basic_short"": ""20160325T132844"", ""iso8601_micro"": ""2016-03-25T13:28:44.056144Z"", ""minute"": ""28"", ""month"": ""03"", ""second"": ""44"", ""time"": ""13:28:44"", ""tz"": ""UTC"", ""tz_offset"": ""+0000"", ""weekday"": ""Friday"", ""weekday_number"": ""5"", ""weeknumber"": ""12"", ""year"": ""2016"" }, ""ansible_default_ipv4"": { ""address"": ""10.0.2.15"", ""alias"": ""enp0s3"", ""broadcast"": ""10.0.2.255"", ""gateway"": ""10.0.2.2"", ""interface"": ""enp0s3"", ""macaddress"": ""08:00:27:07:5e:92"", ""mtu"": 1500, ""netmask"": ""255.255.255.0"", ""network"": ""10.0.2.0"", ""type"": ""ether"" }, ""ansible_default_ipv6"": {}, ""ansible_devices"": { ""sda"": { ""holders"": [], ""host"": ""IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)"", ""model"": ""VBOX HARDDISK"", ""partitions"": { ""sda1"": { ""sectors"": ""1024000"", ""sectorsize"": 512, ""size"": ""500.00 MB"", ""start"": ""2048"" }, ""sda2"": { ""sectors"": ""19740672"", ""sectorsize"": 512, ""size"": ""9.41 GB"", ""start"": ""1026048"" } }, ""removable"": ""0"", ""rotational"": ""1"", ""scheduler_mode"": ""deadline"", ""sectors"": ""20766720"", ""sectorsize"": ""512"", ""size"": ""9.90 GB"", ""support_discard"": ""0"", ""vendor"": ""ATA"" } }, ""ansible_distribution"": ""CentOS"", ""ansible_distribution_major_version"": ""7"", ""ansible_distribution_release"": ""Core"", ""ansible_distribution_version"": ""7.2.1511"", ""ansible_dns"": { ""nameservers"": [ ""10.0.2.3"" ] }, ""ansible_docker0"": { ""active"": false, ""device"": ""docker0"", ""id"": ""8000.0242d0a0e330"", ""interfaces"": [], ""ipv4"": { ""address"": ""172.17.0.1"", ""broadcast"": ""global"", ""netmask"": ""255.255.0.0"", ""network"": ""172.17.0.0"" }, ""macaddress"": ""02:42:d0:a0:e3:30"", ""mtu"": 1500, ""promisc"": false, ""stp"": false, ""type"": ""bridge"" }, ""ansible_domain"": ""localdomain"", ""ansible_enp0s3"": { ""active"": true, ""device"": ""enp0s3"", ""ipv4"": { ""address"": ""10.0.2.15"", ""broadcast"": ""10.0.2.255"", ""netmask"": ""255.255.255.0"", ""network"": ""10.0.2.0"" }, ""ipv6"": [ { ""address"": ""fe80::a00:27ff:fe07:5e92"", ""prefix"": ""64"", ""scope"": ""link"" } ], ""macaddress"": ""08:00:27:07:5e:92"", ""module"": ""e1000"", ""mtu"": 1500, ""pciid"": ""0000:00:03.0"", ""promisc"": false, ""type"": ""ether"" }, ""ansible_enp0s8"": { ""active"": true, ""device"": ""enp0s8"", ""ipv4"": { 
""address"": ""10.100.101.111"", ""broadcast"": ""10.100.101.255"", ""netmask"": ""255.255.255.0"", ""network"": ""10.100.101.0"" }, ""ipv6"": [ { ""address"": ""fe80::a00:27ff:fe63:1c7c"", ""prefix"": ""64"", ""scope"": ""link"" } ], ""macaddress"": ""08:00:27:63:1c:7c"", ""module"": ""e1000"", ""mtu"": 1500, ""pciid"": ""0000:00:08.0"", ""promisc"": false, ""type"": ""ether"" }, ""ansible_env"": { ""HOME"": ""/home/vagrant"", ""LANG"": ""en_US.UTF-8"", ""LC_ALL"": ""en_US.UTF-8"", ""LC_MESSAGES"": ""en_US.UTF-8"", ""LESSOPEN"": ""||/usr/bin/lesspipe.sh %s"", ""LOGNAME"": ""vagrant"", ""MAIL"": ""/var/mail/vagrant"", ""PATH"": ""/usr/local/bin:/usr/bin"", ""PWD"": ""/home/vagrant"", ""QT_GRAPHICSSYSTEM_CHECKED"": ""1"", ""SELINUX_LEVEL_REQUESTED"": """", ""SELINUX_ROLE_REQUESTED"": """", ""SELINUX_USE_CURRENT_RANGE"": """", ""SHELL"": ""/bin/bash"", ""SHLVL"": ""2"", ""SSH_CLIENT"": ""10.0.2.2 57776 22"", ""SSH_CONNECTION"": ""10.0.2.2 57776 10.0.2.15 22"", ""SSH_TTY"": ""/dev/pts/0"", ""TERM"": ""xterm-256color"", ""USER"": ""vagrant"", ""XDG_RUNTIME_DIR"": ""/run/user/1000"", ""XDG_SESSION_ID"": ""11"", ""_"": ""/usr/bin/python"" }, ""ansible_fips"": false, ""ansible_form_factor"": ""Other"", ""ansible_fqdn"": ""localhost.localdomain"", ""ansible_hostname"": ""gl-docker"", ""ansible_interfaces"": [ ""lo"", ""docker0"", ""enp0s3"", ""enp0s8"" ], ""ansible_kernel"": ""4.4.3-1.el7.elrepo.x86_64"", ""ansible_lo"": { ""active"": true, ""device"": ""lo"", ""ipv4"": { ""address"": ""127.0.0.1"", ""broadcast"": ""host"", ""netmask"": ""255.0.0.0"", ""network"": ""127.0.0.0"" }, ""ipv6"": [ { ""address"": ""::1"", ""prefix"": ""128"", ""scope"": ""host"" } ], ""mtu"": 65536, ""promisc"": false, ""type"": ""loopback"" }, ""ansible_lsb"": { ""codename"": ""Core"", ""description"": ""CentOS Linux release 7.2.1511 (Core)"", ""id"": ""CentOS"", ""major_release"": ""7"", ""release"": ""7.2.1511"" }, ""ansible_machine"": ""x86_64"", ""ansible_machine_id"": ""69c2e9ff6b3b4594b1a26db25287da79"", ""ansible_memfree_mb"": 1629, ""ansible_memory_mb"": { ""nocache"": { ""free"": 1790, ""used"": 210 }, ""real"": { ""free"": 1629, ""total"": 2000, ""used"": 371 }, ""swap"": { ""cached"": 0, ""free"": 1015, ""total"": 1015, ""used"": 0 } }, ""ansible_memtotal_mb"": 2000, ""ansible_mounts"": [ { ""device"": ""/dev/mapper/centos-root"", ""fstype"": ""xfs"", ""mount"": ""/"", ""options"": ""rw,seclabel,relatime,attr2,inode64,noquota"", ""size_available"": 6826946560, ""size_total"": 8986296320, ""uuid"": ""319365c2-ad4a-4a6b-986a-0b050e96624c"" }, { ""device"": ""/dev/sda1"", ""fstype"": ""xfs"", ""mount"": ""/boot"", ""options"": ""rw,seclabel,relatime,attr2,inode64,noquota"", ""size_available"": 320561152, ""size_total"": 520794112, ""uuid"": ""9a2e2b29-a60f-484b-be03-30df74cc87a4"" } ], ""ansible_nodename"": ""gl-docker"", ""ansible_os_family"": ""RedHat"", ""ansible_pkg_mgr"": ""yum"", ""ansible_processor"": [ ""GenuineIntel"", ""Intel(R) Core(TM) i7-4578U CPU @ 3.00GHz"" ], ""ansible_processor_cores"": 1, ""ansible_processor_count"": 1, ""ansible_processor_threads_per_core"": 1, ""ansible_processor_vcpus"": 1, ""ansible_product_name"": ""VirtualBox"", ""ansible_product_serial"": ""NA"", ""ansible_product_uuid"": ""NA"", ""ansible_product_version"": ""1.2"", ""ansible_python_version"": ""2.7.5"", ""ansible_selinux"": { ""config_mode"": ""permissive"", ""mode"": ""permissive"", ""policyvers"": 30, ""status"": ""enabled"", ""type"": ""targeted"" }, ""ansible_service_mgr"": ""systemd"", 
""ansible_ssh_host_key_ecdsa_public"": ""AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDB17PdGWfCxEJRNI1dj88lwt47Ah4eneGboTvG8V2Rctf7nGAOdLrB+//YrHPyb7mSiJp58/p+w8qkKy4sFQ38="", ""ansible_ssh_host_key_ed25519_public"": ""AAAAC3NzaC1lZDI1NTE5AAAAIHtYETIJC0EFGF1Mam7pYjiVxAVTNQp6yqADQoSS13bP"", ""ansible_ssh_host_key_rsa_public"": ""AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ8cfLKMpYB/mrhS3DuBTB6cTryHUzLHJ1gIk9Ro+QhbidvnAb9Br4182qytb2fmSB0kn+I4WbZBSDj2Kv9uicL8BVEyoMZtAsDZ0cHBsN5Su+Dk5mijG4PplAM8CjU6BxvhGZJ+lxuT4IMMCEMua7SmVVITaiebVbcTi7x61WsuOaKAAe+D9xaHcY0tlKvr0XaZxnyxtscbHLEc1vi6WUYlM1uLXdnMycE5DYlmsl5FfCkrtFeEG/xdbbcmBlrSl7WUwyVY0w1GPGBQFatEHf8PJhYTELNWVHR1rCMMP8Q/uiT9E/UkQHASS3XgRe+W6FqgD6pO6mOHkLXjoFVtsD"", ""ansible_swapfree_mb"": 1015, ""ansible_swaptotal_mb"": 1015, ""ansible_system"": ""Linux"", ""ansible_system_vendor"": ""innotek GmbH"", ""ansible_uptime_seconds"": 2173, ""ansible_user_dir"": ""/home/vagrant"", ""ansible_user_gecos"": ""vagrant"", ""ansible_user_gid"": 1000, ""ansible_user_id"": ""vagrant"", ""ansible_user_shell"": ""/bin/bash"", ""ansible_user_uid"": 1000, ""ansible_userspace_architecture"": ""x86_64"", ""ansible_userspace_bits"": ""64"", ""ansible_virtualization_role"": ""guest"", ""ansible_virtualization_type"": ""virtualbox"", ""module_setup"": true }, ""changed"": false, ""invocation"": { ""module_args"": { ""fact_path"": ""facts"", ""filter"": ""*"" }, ""module_name"": ""setup"" } } ``` ",1,fact gathering randomly fails on a vm issue type bug report component name setup module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts host key checking false nocows fact caching jsonfile fact caching connection facts ansible managed ansible managed file modified on y m d by uid on host scp if ssh true timeout pipelining true ssh args o controlmaster auto o controlpersist os environment host ubuntu linux laptappy generic ubuntu smp thu mar utc gnu linux vm centos linux gl docker elrepo smp thu feb est gnu linux summary trying to gather facts from a newly spun up vagrant box fails i am not using the ansible vagrant provisioner i am running ansible after the fact i can connect all night and day with the connection string that ansible verbose output uses after a successful fact gathering i can run ansible commands all night and day against my node s fact gathering is the only thing that chokes randomly steps to reproduce the command i run cd ansible ansible vagrant m setup a fact path facts vvvv cd inventory regardless if i use the private key or password the results are the same gl docker ansible connection ssh ansible host ansible port ansible user vagrant ansible ssh private key file vagrant machines gl docker virtualbox private key gl docker i can run this command and everything appears to exit cleanly i copy pasted this string from the ansible verbose output for i in do ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r tt echo e nhey im in n exit echo echo i do i ve also set the following variable to successfully and repeatedly run the setup module on the host itself export ansible keep remote files expected results i expect fact gathering to return successfully each 
and every time actual results fact gathering randomly fails here are two subsequent fact gathering runs using opt work repos docker microservice devenv ansible ansible cfg as config file establish ssh connection for user vagrant ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r tt bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp tmpwzywqd to home vagrant ansible tmp ansible tmp setup ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r tt bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp setup rm rf home vagrant ansible tmp ansible tmp dev null gl docker failed changed false failed true invocation module name setup module stderr module stdout invocation module args filter fact path facts changed false ansible verbose override true ansible facts ansible product serial na ansible form factor other ansible product version ansible fips false ansible service mgr systemd ansible user id vagrant module setup true ansible memtotal mb ansible ssh host key rsa public ansible ssh host key ecdsa public p ansible distribution version ansible domain localdomain ansible user shell bin bash ansible date time weekday number basic short tz utc weeknumber hour year minute tz offset month epoch micro weekday friday time date day basic second ansible ssh host key public ansible processor cores ansible virtualization role guest ansible env lessopen usr bin lesspipe sh s ssh client selinux use current range logname vagrant user vagrant path usr local bin usr bin home home vagrant lang en us utf term xterm shell bin bash shlvl lc all en us utf xdg runtime dir run user selinux role requested qt graphicssystem checked xdg session id usr bin python lc messages en us utf ssh tty dev pts selinux level requested pwd home vagrant mail var mail vagrant ssh connection ansible processor vcpus ansible macaddress interfaces mtu device promisc false stp false broadcast global netmask network address active false type bridge id ansible bios version virtualbox ansible processor ansible virtualization type virtualbox ansible lo mtu device lo promisc false broadcast host netmask network address active true type loopback ansible userspace bits ansible architecture ansible default macaddress network mtu broadcast alias netmask address interface type ether gateway ansible swapfree mb ansible default ansible distribution release core ansible system vendor innotek 
gmbh ansible os family redhat ansible cmdline boot image vmlinuz elrepo quiet true rhgb true rd lvm lv centos swap crashkernel auto ro true root dev mapper centos root ansible mounts uuid size total mount size available fstype xfs device dev mapper centos root options rw seclabel relatime noquota msg module failure parsed false using opt work repos docker microservice devenv ansible ansible cfg as config file establish ssh connection for user vagrant ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r tt bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to home vagrant ansible tmp ansible tmp setup ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile vagrant machines gl docker virtualbox private key o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home phil ansible cp ansible ssh h p r tt bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp setup rm rf home vagrant ansible tmp ansible tmp dev null gl docker success ansible facts ansible all addresses ansible all addresses ansible architecture ansible bios date ansible bios version virtualbox ansible cmdline boot image vmlinuz elrepo crashkernel auto quiet true rd lvm lv centos swap rhgb true ro true root dev mapper centos root ansible date time date day epoch hour basic basic short micro minute month second time tz utc tz offset weekday friday weekday number weeknumber year ansible default address alias broadcast gateway interface macaddress mtu netmask network type ether ansible default ansible devices sda holders host ide interface intel corporation eb mb ide rev model vbox harddisk partitions sectors sectorsize size mb start sectors sectorsize size gb start removable rotational scheduler mode deadline sectors sectorsize size gb support discard vendor ata ansible distribution centos ansible distribution major version ansible distribution release core ansible distribution version ansible dns nameservers ansible active false device id interfaces address broadcast global netmask network macaddress mtu promisc false stp false type bridge ansible domain localdomain ansible active true device address broadcast netmask network address prefix scope link macaddress module mtu pciid promisc false type ether ansible active true device address broadcast netmask network address prefix scope link macaddress module mtu pciid promisc false type ether ansible env home home vagrant lang en us utf lc all en us utf lc messages en us utf lessopen usr bin lesspipe sh s logname vagrant mail var mail vagrant path usr local bin usr bin 
pwd home vagrant qt graphicssystem checked selinux level requested selinux role requested selinux use current range shell bin bash shlvl ssh client ssh connection ssh tty dev pts term xterm user vagrant xdg runtime dir run user xdg session id usr bin python ansible fips false ansible form factor other ansible fqdn localhost localdomain ansible hostname gl docker ansible interfaces lo ansible kernel elrepo ansible lo active true device lo address broadcast host netmask network address prefix scope host mtu promisc false type loopback ansible lsb codename core description centos linux release core id centos major release release ansible machine ansible machine id ansible memfree mb ansible memory mb nocache free used real free total used swap cached free total used ansible memtotal mb ansible mounts device dev mapper centos root fstype xfs mount options rw seclabel relatime noquota size available size total uuid device dev fstype xfs mount boot options rw seclabel relatime noquota size available size total uuid ansible nodename gl docker ansible os family redhat ansible pkg mgr yum ansible processor genuineintel intel r core tm cpu ansible processor cores ansible processor count ansible processor threads per core ansible processor vcpus ansible product name virtualbox ansible product serial na ansible product uuid na ansible product version ansible python version ansible selinux config mode permissive mode permissive policyvers status enabled type targeted ansible service mgr systemd ansible ssh host key ecdsa public p ansible ssh host key public ansible ssh host key rsa public ansible swapfree mb ansible swaptotal mb ansible system linux ansible system vendor innotek gmbh ansible uptime seconds ansible user dir home vagrant ansible user gecos vagrant ansible user gid ansible user id vagrant ansible user shell bin bash ansible user uid ansible userspace architecture ansible userspace bits ansible virtualization role guest ansible virtualization type virtualbox module setup true changed false invocation module args fact path facts filter module name setup ,1 1157,5047424283.0,IssuesEvent,2016-12-20 09:23:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container with empty links list always restarts,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION *none* ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY When using `links: []`, the container is always restarted. ##### STEPS TO REPRODUCE Run this playbook twice: ``` - hosts: localhost tasks: - docker_container: name: test image: alpine links: [] command: sleep 10000 ``` ##### EXPECTED RESULTS The container is started once. It is not restarted when running the second time. ##### ACTUAL RESULTS The container is restarted on the second run. When running with `-vvv --diff`: ``` TASK [docker_container] ******************************************************** [...] changed: [localhost] => { ""ansible_facts"": {}, ""changed"": true, ""diff"": { ""differences"": [ { ""expected_links"": { ""container"": null, ""parameter"": [] } } ] }, ""invocation"": { [...] ``` The problem seems to be that the modules compares an empty list with `None`. From `docker.log` (when enabled): ``` check differences expected_links [] vs None primitive compare: expected_links [...] 
differences [ { ""expected_links"": { ""container"": null, ""parameter"": [] } } ] ```",True,"docker_container with empty links list always restarts - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION *none* ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY When using `links: []`, the container is always restarted. ##### STEPS TO REPRODUCE Run this playbook twice: ``` - hosts: localhost tasks: - docker_container: name: test image: alpine links: [] command: sleep 10000 ``` ##### EXPECTED RESULTS The container is started once. It is not restarted when running the second time. ##### ACTUAL RESULTS The container is restarted on the second run. When running with `-vvv --diff`: ``` TASK [docker_container] ******************************************************** [...] changed: [localhost] => { ""ansible_facts"": {}, ""changed"": true, ""diff"": { ""differences"": [ { ""expected_links"": { ""container"": null, ""parameter"": [] } } ] }, ""invocation"": { [...] ``` The problem seems to be that the modules compares an empty list with `None`. From `docker.log` (when enabled): ``` check differences expected_links [] vs None primitive compare: expected_links [...] differences [ { ""expected_links"": { ""container"": null, ""parameter"": [] } } ] ```",1,docker container with empty links list always restarts issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ubuntu summary when using links the container is always restarted steps to reproduce run this playbook twice hosts localhost tasks docker container name test image alpine links command sleep expected results the container is started once it is not restarted when running the second time actual results the container is restarted on the second run when running with vvv diff task changed ansible facts changed true diff differences expected links container null parameter invocation the problem seems to be that the modules compares an empty list with none from docker log when enabled check differences expected links vs none primitive compare expected links differences expected links container null parameter ,1 1887,6577527227.0,IssuesEvent,2017-09-12 01:32:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module tries to outsmart git and ssh and fails,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Plugin Name: git ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: from ArchLinux too Debian Jessie ##### Summary: When trying to clone a repo using an ssh ""alias"" everything fails. ##### Steps To Reproduce: playbook: ``` - git: repo: ""git@A-github.internal:myrepo.git"" dest: ~/myrepo version: ""master"" ``` ~/.ssh/config: ``` Host A-github.internal IdentityFile ~/.ssh/deploykey_A HostName github.internal ``` hostkey for github.internal is present `git clone ""git@A-github.internal:myrepo.git""` works ##### Expected Results: the git repo gets cloned. ##### Actual Results: error: ``` A-github.internal has an unknown hostkey. 
Set accept_hostkey to True or manually add the hostkey prior to running the git module ``` ##### Attempted workaround: If i set accept_hostkey to true, then it fails trying to resolve A-github.internal (same if i try to add a hostkey with that name using ""known_hosts"") ",True,"git module tries to outsmart git and ssh and fails - ##### Issue Type: - Bug Report ##### Plugin Name: git ##### Ansible Version: ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Environment: from ArchLinux too Debian Jessie ##### Summary: When trying to clone a repo using an ssh ""alias"" everything fails. ##### Steps To Reproduce: playbook: ``` - git: repo: ""git@A-github.internal:myrepo.git"" dest: ~/myrepo version: ""master"" ``` ~/.ssh/config: ``` Host A-github.internal IdentityFile ~/.ssh/deploykey_A HostName github.internal ``` hostkey for github.internal is present `git clone ""git@A-github.internal:myrepo.git""` works ##### Expected Results: the git repo gets cloned. ##### Actual Results: error: ``` A-github.internal has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module ``` ##### Attempted workaround: If i set accept_hostkey to true, then it fails trying to resolve A-github.internal (same if i try to add a hostkey with that name using ""known_hosts"") ",1,git module tries to outsmart git and ssh and fails issue type bug report plugin name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides environment from archlinux too debian jessie summary when trying to clone a repo using an ssh alias everything fails steps to reproduce for bugs please show exactly how to reproduce the problem for new features show how the feature would be used playbook git repo git a github internal myrepo git dest myrepo version master ssh config host a github internal identityfile ssh deploykey a hostname github internal hostkey for github internal is present git clone git a github internal myrepo git works expected results the git repo gets cloned actual results error a github internal has an unknown hostkey set accept hostkey to true or manually add the hostkey prior to running the git module attempted workaround if i set accept hostkey to true then it fails trying to resolve a github internal same if i try to add a hostkey with that name using known hosts ,1 1843,6577379701.0,IssuesEvent,2017-09-12 00:30:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/arlindo/projects/ts/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ansible is running on Ubuntu 14.04 managing AWS ##### SUMMARY ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound ##### STEPS TO REPRODUCE Issue is sporadic, therefore can't reproduce at will. 
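For the git ssh-alias report above, one hedged interim workaround is to register the real host's key under the alias name before the clone, so the module's hostkey check finds an entry for A-github.internal. The alias, repo URL and destination are taken from the report; the ssh-keyscan lookup and the known_hosts task are assumptions about a workable setup, not behaviour of the git module itself:

```
# Sketch only: pre-populate known_hosts with github.internal's key, renamed to
# the alias, then clone through the alias as before.
- name: register the hostkey under the alias used in ~/.ssh/config
  known_hosts:
    name: A-github.internal
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa github.internal') | replace('github.internal', 'A-github.internal') }}"
    state: present

- name: clone via the alias
  git:
    repo: git@A-github.internal:myrepo.git
    dest: ~/myrepo
    version: master
```

If that entry is accepted, accept_hostkey should no longer matter, since the key already exists before the git task runs.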
``` - name: VPC | Creating an AWS VPC inside mentioned Region local_action: module: ec2_vpc region: ""{{ vpc_region }}"" state: present cidr_block: ""{{ vpc_cidr_block }}"" resource_tags: { ""Name"":""{{ vpc_name }}"" } subnets: ""{{ vpc_subnets }}"" internet_gateway: yes route_tables: ""{{ public_subnet_rt }}"" register: vpc ``` ##### EXPECTED RESULTS For a new VPC to be created in AWS ##### ACTUAL RESULTS ``` TASK [VPC | Creating an AWS VPC inside mentioned Region] *********************** task path: /home/arlindo/projects/ts/playbooks/aws/tasks/vpc.yml:12 ESTABLISH LOCAL CONNECTION FOR USER: arlindo localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `"" )' localhost PUT /tmp/tmpJj1mmV TO /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc localhost EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc; rm -rf ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/"" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 2944, in main() File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 731, in main (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn) File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 387, in create_vpc vpc_conn.create_tags(vpc.id, new_tags) File ""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py"", line 4219, in create_tags return self.get_status('CreateTags', params, verb='POST') File ""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py"", line 1227, in get_status raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request InvalidVpcID.NotFoundThe vpc ID 'vpc-7a82401d' does not exista0ffcb87-f56e-495f-94ea-893746b8dba8 fatal: [localhost -> localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_vpc""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 2944, in \n main()\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 731, in main\n (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 387, in create_vpc\n vpc_conn.create_tags(vpc.id, new_tags)\n File \""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py\"", line 4219, in create_tags\n return self.get_status('CreateTags', params, verb='POST')\n File \""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py\"", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n\nInvalidVpcID.NotFoundThe vpc ID 'vpc-7a82401d' does not exista0ffcb87-f56e-495f-94ea-893746b8dba8\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} to retry, use: --limit @playbooks/provisionaws.retry ``` ",True,"ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/arlindo/projects/ts/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ansible is running on Ubuntu 14.04 managing AWS ##### SUMMARY ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound ##### STEPS TO REPRODUCE Issue is sporadic, therefore can't reproduce at will. ``` - name: VPC | Creating an AWS VPC inside mentioned Region local_action: module: ec2_vpc region: ""{{ vpc_region }}"" state: present cidr_block: ""{{ vpc_cidr_block }}"" resource_tags: { ""Name"":""{{ vpc_name }}"" } subnets: ""{{ vpc_subnets }}"" internet_gateway: yes route_tables: ""{{ public_subnet_rt }}"" register: vpc ``` ##### EXPECTED RESULTS For a new VPC to be created in AWS ##### ACTUAL RESULTS ``` TASK [VPC | Creating an AWS VPC inside mentioned Region] *********************** task path: /home/arlindo/projects/ts/playbooks/aws/tasks/vpc.yml:12 ESTABLISH LOCAL CONNECTION FOR USER: arlindo localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `"" )' localhost PUT /tmp/tmpJj1mmV TO /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc localhost EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc; rm -rf ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/"" > /dev/null 2>&1' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 2944, in main() File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 731, in main (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn) File ""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc"", line 387, in create_vpc vpc_conn.create_tags(vpc.id, new_tags) File ""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py"", line 4219, in create_tags return self.get_status('CreateTags', params, verb='POST') File ""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py"", line 1227, in get_status raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request InvalidVpcID.NotFoundThe vpc ID 'vpc-7a82401d' does not exista0ffcb87-f56e-495f-94ea-893746b8dba8 fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_vpc""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 2944, in \n main()\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 731, in main\n (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)\n File \""/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\"", line 387, in create_vpc\n vpc_conn.create_tags(vpc.id, new_tags)\n File \""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py\"", line 4219, in create_tags\n return self.get_status('CreateTags', params, verb='POST')\n File \""/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py\"", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n\nInvalidVpcID.NotFoundThe vpc ID 'vpc-7a82401d' does not exista0ffcb87-f56e-495f-94ea-893746b8dba8\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} to retry, use: --limit @playbooks/provisionaws.retry ``` ",1, vpc module vpc creation fails sometimes with invalidvpcid notfound issue type bug report component name vpc module ansible version ansible config file home arlindo projects ts ansible cfg configured module search path default w o overrides configuration os environment ansible is running on ubuntu managing aws summary vpc module vpc creation fails sometimes with invalidvpcid notfound steps to reproduce issue is sporadic therefore can t reproduce at will name vpc creating an aws vpc inside mentioned region local action module vpc region vpc region state present cidr block vpc cidr block resource tags name vpc name subnets vpc subnets internet gateway yes route tables public subnet rt register vpc expected results for a new vpc to be created in aws actual results task task path home arlindo projects ts playbooks aws tasks vpc yml establish local connection for user arlindo localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put tmp to home arlindo ansible tmp ansible tmp vpc localhost exec bin sh c lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home arlindo ansible tmp ansible tmp 
vpc rm rf home arlindo ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file home arlindo ansible tmp ansible tmp vpc line in main file home arlindo ansible tmp ansible tmp vpc line in main vpc dict new vpc id subnets changed igw id changed create vpc module vpc conn file home arlindo ansible tmp ansible tmp vpc line in create vpc vpc conn create tags vpc id new tags file usr local lib dist packages boto egg boto connection py line in create tags return self get status createtags params verb post file usr local lib dist packages boto egg boto connection py line in get status raise self responseerror response status response reason body boto exception bad request invalidvpcid notfound the vpc id vpc does not exist fatal failed changed false failed true invocation module name vpc module stderr traceback most recent call last n file home arlindo ansible tmp ansible tmp vpc line in n main n file home arlindo ansible tmp ansible tmp vpc line in main n vpc dict new vpc id subnets changed igw id changed create vpc module vpc conn n file home arlindo ansible tmp ansible tmp vpc line in create vpc n vpc conn create tags vpc id new tags n file usr local lib dist packages boto egg boto connection py line in create tags n return self get status createtags params verb post n file usr local lib dist packages boto egg boto connection py line in get status n raise self responseerror response status response reason body nboto exception bad request n n invalidvpcid notfound the vpc id vpc does not exist n module stdout msg module failure parsed false to retry use limit playbooks provisionaws retry ,1 1114,4988947314.0,IssuesEvent,2016-12-08 10:09:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,find_module: allow regex in paths,affects_2.1 feature_idea waiting_on_maintainer,"Issue Type: Feature Idea Component Name: find module Ansible Version: ansible 2.1.0 (devel 210cf06d9a) last updated 2016/01/04 11:21:26 (GMT +200) Environment: Ubuntu 15.04 Example: To replace a simple ``` find /home/*/foo/bar/ ``` with the new ""find"" module you have to use a lot of nested ""find"" calls (one for each sub folder level). It would be great if this could be done with one simple call. At the moment this is the result: ``` ""msg"": ""/home/*/foo/bar/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n"" ``` ",True,"find_module: allow regex in paths - Issue Type: Feature Idea Component Name: find module Ansible Version: ansible 2.1.0 (devel 210cf06d9a) last updated 2016/01/04 11:21:26 (GMT +200) Environment: Ubuntu 15.04 Example: To replace a simple ``` find /home/*/foo/bar/ ``` with the new ""find"" module you have to use a lot of nested ""find"" calls (one for each sub folder level). It would be great if this could be done with one simple call. 
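Until the find module grows wildcard or regex support as requested here, a hedged two-step sketch is to expand the glob first and then hand the concrete directories to find. The /home/*/foo/bar/ path comes from the example above; splitting the job across a shell task and a find task is only an illustration, not an endorsed pattern:

```
# Sketch: let the shell expand the wildcard, then run find over real paths.
- name: expand /home/*/foo/bar/ into concrete directories
  shell: ls -d /home/*/foo/bar/
  register: bar_dirs
  changed_when: false

- name: find files below each expanded directory
  find:
    paths: "{{ bar_dirs.stdout_lines }}"
    recurse: yes
  register: found_files
```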
At the moment this is the result: ``` ""msg"": ""/home/*/foo/bar/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n"" ``` ",1,find module allow regex in paths issue type feature idea component name find module ansible version ansible devel last updated gmt environment ubuntu example to replace a simple find home foo bar with the new find module you have to use a lot of nested find calls one for each sub folder level it would be great if this could be done with one simple call at the moment this is the result msg home foo bar was skipped as it does not seem to be a valid directory or it cannot be accessed n ,1 753,4351730564.0,IssuesEvent,2016-08-01 01:10:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_virtualmachine module fails creating a virtualmachine when the name of vm contains upper-case.,azure bug_report cloud easyfix waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 221520cbad) last updated 2016/07/13 15:32:29 (GMT +900) lib/ansible/modules/core: (detached HEAD db8af4c5af) last updated 2016/07/13 15:32:38 (GMT +900) lib/ansible/modules/extras: (detached HEAD 482b1a640e) last updated 2016/07/13 15:32:38 (GMT +900) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Creating a new azure virtualmachine with upper-cased letter fails without setting a specific storage account because `AzureRMVirtualMachine.create_default_storage_account` try to create a storage account with upper-case. As described in [this document](https://msdn.microsoft.com/en-us/library/azure/hh264518.aspx), the storage account name can only use numbers and lower-case letters. ##### STEPS TO REPRODUCE Here is a sample task. ```yaml - azure_rm_virtualmachine: name: nameWithUpper resource_group: Testing vm_size: Standard_D1 public_ip_allocation_method: Dynamic admin_username: AdminUserName admin_password: AdminP@ssw0rd open_ports: - 3389 - 5986 os_type: Windows image: publisher: MicrosoftWindowsServer offer: WindowsServer sku: Windows-Server-Technical-Preview version: latest ``` ##### EXPECTED RESULTS The module should convert the vm name to lowercase before trying to create a default storage account. ##### ACTUAL RESULTS Creating storage account always fails as below. ``` TASK [azure_rm_virtualmachine] ************************************************* fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to create a unique storage account name for nameWithUpper. Try using a different VM name.""} ``` ",True,"azure_rm_virtualmachine module fails creating a virtualmachine when the name of vm contains upper-case. 
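For the upper-case VM name failure above, a hedged workaround sketch is to lower-case the name before the module derives a storage account from it. The task values are the ones from the sample task; using the Jinja lower filter (or, if your module version supports it, an explicit lower-case storage_account_name) is an assumption about what keeps the derived name valid, not documented module behaviour:

```
# Sketch: a lower-case VM name keeps the auto-generated default storage
# account name within Azure's lower-case-letters-and-digits-only rule.
- azure_rm_virtualmachine:
    name: "{{ 'nameWithUpper' | lower }}"   # becomes namewithupper
    resource_group: Testing
    vm_size: Standard_D1
    public_ip_allocation_method: Dynamic
    admin_username: AdminUserName
    admin_password: AdminP@ssw0rd
    os_type: Windows
    image:
      publisher: MicrosoftWindowsServer
      offer: WindowsServer
      sku: Windows-Server-Technical-Preview
      version: latest
```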
- ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_virtualmachine ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 221520cbad) last updated 2016/07/13 15:32:29 (GMT +900) lib/ansible/modules/core: (detached HEAD db8af4c5af) last updated 2016/07/13 15:32:38 (GMT +900) lib/ansible/modules/extras: (detached HEAD 482b1a640e) last updated 2016/07/13 15:32:38 (GMT +900) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Creating a new azure virtualmachine with upper-cased letter fails without setting a specific storage account because `AzureRMVirtualMachine.create_default_storage_account` try to create a storage account with upper-case. As described in [this document](https://msdn.microsoft.com/en-us/library/azure/hh264518.aspx), the storage account name can only use numbers and lower-case letters. ##### STEPS TO REPRODUCE Here is a sample task. ```yaml - azure_rm_virtualmachine: name: nameWithUpper resource_group: Testing vm_size: Standard_D1 public_ip_allocation_method: Dynamic admin_username: AdminUserName admin_password: AdminP@ssw0rd open_ports: - 3389 - 5986 os_type: Windows image: publisher: MicrosoftWindowsServer offer: WindowsServer sku: Windows-Server-Technical-Preview version: latest ``` ##### EXPECTED RESULTS The module should convert the vm name to lowercase before trying to create a default storage account. ##### ACTUAL RESULTS Creating storage account always fails as below. ``` TASK [azure_rm_virtualmachine] ************************************************* fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Failed to create a unique storage account name for nameWithUpper. Try using a different VM name.""} ``` ",1,azure rm virtualmachine module fails creating a virtualmachine when the name of vm contains upper case issue type bug report component name azure rm virtualmachine ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration n a os environment n a summary creating a new azure virtualmachine with upper cased letter fails without setting a specific storage account because azurermvirtualmachine create default storage account try to create a storage account with upper case as described in the storage account name can only use numbers and lower case letters steps to reproduce here is a sample task yaml azure rm virtualmachine name namewithupper resource group testing vm size standard public ip allocation method dynamic admin username adminusername admin password adminp open ports os type windows image publisher microsoftwindowsserver offer windowsserver sku windows server technical preview version latest expected results the module should convert the vm name to lowercase before trying to create a default storage account actual results creating storage account always fails as below task fatal failed changed false failed true msg failed to create a unique storage account name for namewithupper try using a different vm name ,1 1540,6572229716.0,IssuesEvent,2017-09-11 00:20:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"yum module should support state ""downloaded""",affects_2.1 feature_idea waiting_on_maintainer,"Hi, guys! 
In some of our clients, the internet connection isn't that great and we have been forced to split our installation using two playbooks: one for preparing the environment and the other for installing effectively. In the prepare playbook we do things like downloading files to the server. Typically, we run it some hours before the actual installation. Specifically, using yum module we would like to have the rpm packages present in the server, but not installed. Manually we are able to do this with the yumdownloader command. Anyway, thanks for the great tool you guys maintain! Cheers, -- Rodrigo Couto ",True,"yum module should support state ""downloaded"" - Hi, guys! In some of our clients, the internet connection isn't that great and we have been forced to split our installation using two playbooks: one for preparing the environment and the other for installing effectively. In the prepare playbook we do things like downloading files to the server. Typically, we run it some hours before the actual installation. Specifically, using yum module we would like to have the rpm packages present in the server, but not installed. Manually we are able to do this with the yumdownloader command. Anyway, thanks for the great tool you guys maintain! Cheers, -- Rodrigo Couto ",1,yum module should support state downloaded hi guys in some of our clients the internet connection isn t that great and we have been forced to split our installation using two playbooks one for preparing the environment and the other for installing effectively in the prepare playbook we do things like downloading files to the server typically we run it some hours before the actual installation specifically using yum module we would like to have the rpm packages present in the server but not installed manually we are able to do this with the yumdownloader command anyway thanks for the great tool you guys maintain cheers rodrigo couto ,1 1058,4875072483.0,IssuesEvent,2016-11-16 08:16:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git update fails every other time,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION The default which shipped with Fedora release 24 (Twenty Four). ##### OS / ENVIRONMENT N/A ##### SUMMARY Git clone fails every other time, with this error message ``` TASK [clone icons] ************************************************************* fatal: [127.0.0.1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/l33tname/dotfiles/setup.retry ``` ##### STEPS TO REPRODUCE ``` - hosts: local tasks: - name: clone icons git: repo=https://github.com/jcubic/Clarity.git force=yes dest=/home/l33tname/.icons/Clarity - name: config icons command: ./configure chdir=/home/l33tname/.icons/Clarity ``` ##### EXPECTED RESULTS I expect that it works everytime not only every second time. ##### ACTUAL RESULTS ``` TASK [clone icons] ************************************************************* task path: /home/l33tname/dotfiles/git_wtf.yaml:4 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/source_control/git.py <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" && echo ansible-tmp-1479049433.89-122334128883345=""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmpPx4qzT TO /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py <127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'chmod u+x /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/ /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C -tt 127.0.0.1 '/bin/sh -c '""'""'/usr/bin/python /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py; rm -rf ""/home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/"" > /dev/null 2>&1 && sleep 0'""'""'' fatal: [127.0.0.1]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016\r\ndebug1: Reading configuration data /home/l33tname/.ssh/config\r\ndebug1: /home/l33tname/.ssh/config line 1: Applying options for 127.0.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21589\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",True,"git update fails every other time - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION The default which shipped with Fedora release 24 (Twenty Four). ##### OS / ENVIRONMENT N/A ##### SUMMARY Git clone fails every other time, with this error message ``` TASK [clone icons] ************************************************************* fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/l33tname/dotfiles/setup.retry ``` ##### STEPS TO REPRODUCE ``` - hosts: local tasks: - name: clone icons git: repo=https://github.com/jcubic/Clarity.git force=yes dest=/home/l33tname/.icons/Clarity - name: config icons command: ./configure chdir=/home/l33tname/.icons/Clarity ``` ##### EXPECTED RESULTS I expect that it works everytime not only every second time. 
##### ACTUAL RESULTS ``` TASK [clone icons] ************************************************************* task path: /home/l33tname/dotfiles/git_wtf.yaml:4 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/source_control/git.py <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" && echo ansible-tmp-1479049433.89-122334128883345=""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmpPx4qzT TO /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py <127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'chmod u+x /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/ /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C -tt 127.0.0.1 '/bin/sh -c '""'""'/usr/bin/python /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py; rm -rf ""/home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/"" > /dev/null 2>&1 && sleep 0'""'""'' fatal: [127.0.0.1]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016\r\ndebug1: Reading configuration data /home/l33tname/.ssh/config\r\ndebug1: /home/l33tname/.ssh/config line 1: Applying options for 127.0.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21589\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",1,git update fails every other time issue type bug report component name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables the default which shipped with fedora release twenty four os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary git clone fails every other time with this error message task fatal failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module git py line in r n main r n file tmp ansible ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure to retry use limit home dotfiles setup retry steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts local tasks name clone icons git repo force yes dest home icons clarity name config icons command configure chdir home icons clarity expected results i expect that it works everytime not only every second time actual results task task path home dotfiles git wtf yaml using module file usr lib site packages ansible modules core source control git py establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp git py ssh exec sftp b vvv c o controlmaster auto o controlpersist o 
kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp git py sleep establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c tt bin sh c usr bin python home ansible tmp ansible tmp git py rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name git module stderr openssh openssl fips may r reading configuration data home ssh config r home ssh config line applying options for r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to closed r n module stdout traceback most recent call last r n file tmp ansible mheepb ansible module git py line in r n main r n file tmp ansible mheepb ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure ,1 1711,6574449384.0,IssuesEvent,2017-09-11 12:56:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty"" ",affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group / ec2 ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ElementaryOS Loki macOS Sierra 10.12.1 ##### SUMMARY While doing some tests suddenly i started receiving this error while creating machines: fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty""} I thought i messed something up so i reverted to my yesterday version of the code but the issue was still there. 
##### STEPS TO REPRODUCE ``` --- - name: Engine Start hosts: localhost gather_facts: true connection: local tasks: - name: Create VPC ec2_vpc: state: present cidr_block: 172.29.0.0/16 resource_tags: { ""env"": ""{{env}}"" , ""Name"": ""{{project_name}}"" } region: ""{{ aws_region }}"" register: ec2_env_vpc - name: Create app private dns zone route53_zone: zone: ""{{domain_zone}}"" state: present vpc_id: ""{{ec2_env_vpc.vpc_id}}"" comment: 'Internal Zone for app' - name: Create security group ec2_group: name: ""{{ project_name }}_security_group"" description: ""{{ project_name }} security group"" region: ""{{ aws_region }}"" vpc_id: ""{{ec2_env_vpc.vpc_id}}"" rules: - proto: tcp from_port: 22 to_port: 22 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 80 to_port: 80 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 443 to_port: 443 cidr_ip: 0.0.0.0/0 rules_egress: - proto: all cidr_ip: 0.0.0.0/0 register: basic_firewall - name: Create an EC2 key ec2_key: name: ""{{ project_name }}-{{ env }}-key"" region: ""{{ aws_region }}"" register: ec2_key - name: Save private key copy: content: ""{{ ec2_key.key.private_key }}"" dest: ""./aws-{{ env }}-private.pem"" mode: 0600 when: ec2_key.changed - name: Create Redis ec2: key_name: ""{{ project_name }}-{{ env }}-key"" region: ""{{ aws_region }}"" group_id: ""{{ basic_firewall.group_id }}"" instance_type: ""{{ instance_type }}"" image: ""{{ ami }}"" wait: yes instance_tags: Name: ""redis"" env: ""{{env}}"" exact_count: 1 count_tag: Name: ""redis"" env: ""{{env}}"" register: ec2_redis ``` ##### EXPECTED RESULTS create machines ##### ACTUAL RESULTS errors ``` PLAY [Engine Start] ************************************************************ TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660 `"" && echo ansible-tmp-1478219783.91-199174880918660=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpEYWlHH TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [Create VPC] ************************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835 `"" && echo ansible-tmp-1478219784.52-97557555770835=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpe3WMHj TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc && sleep 0' 
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""igw_id"": null, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""cidr_block"": ""172.29.0.0/16"", ""dns_hostnames"": true, ""dns_support"": true, ""ec2_url"": null, ""instance_tenancy"": ""default"", ""internet_gateway"": false, ""profile"": null, ""region"": ""eu-west-1"", ""resource_tags"": {""Name"": ""monx"", ""env"": ""prod-ifrastrc""}, ""route_tables"": null, ""security_token"": null, ""state"": ""present"", ""subnets"": null, ""validate_certs"": true, ""vpc_id"": null, ""wait"": false, ""wait_timeout"": ""300""}, ""module_name"": ""ec2_vpc""}, ""subnets"": [], ""vpc"": {""cidr_block"": ""172.29.0.0/16"", ""dhcp_options_id"": ""dopt-ab0907ce"", ""id"": ""vpc-259fea41"", ""region"": ""eu-west-1"", ""state"": ""available""}, ""vpc_id"": ""vpc-259fea41""} TASK [Create app private dns zone] ********************************************* task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:15 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408 `"" && echo ansible-tmp-1478219786.49-20807177095408=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpHpqi6a TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""comment"": ""Internal Zone for app"", ""ec2_url"": null, ""profile"": null, ""region"": null, ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""vpc_id"": ""vpc-259fea41"", ""vpc_region"": null, ""zone"": ""twui.gonova.al""}, ""module_name"": ""route53_zone""}, ""set"": {""comment"": ""Internal Zone for app"", ""name"": ""twui.gonova.al."", ""private_zone"": false, ""vpc_id"": ""vpc-259fea41"", ""vpc_region"": null, ""zone_id"": ""ZZWP7CFJL8WAC""}} TASK [Create security group] *************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:22 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398 `"" && echo ansible-tmp-1478219787.86-187686993911398=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpaHUqBj TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group <127.0.0.1> EXEC /bin/sh -c 'chmod u+x 
/Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""group_id"": ""sg-2e73a848"", ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""description"": ""monx security group"", ""ec2_url"": null, ""name"": ""monx_security_group"", ""profile"": null, ""purge_rules"": true, ""purge_rules_egress"": true, ""region"": ""eu-west-1"", ""rules"": [{""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 22, ""proto"": ""tcp"", ""to_port"": 22}, {""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 80, ""proto"": ""tcp"", ""to_port"": 80}, {""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 443, ""proto"": ""tcp"", ""to_port"": 443}], ""rules_egress"": [{""cidr_ip"": ""0.0.0.0/0"", ""from_port"": null, ""proto"": -1, ""to_port"": null}], ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""vpc_id"": ""vpc-259fea41""}, ""module_name"": ""ec2_group""}} TASK [Create an EC2 key] ******************************************************* task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:46 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217 `"" && echo ansible-tmp-1478219789.04-264318626940217=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpL0WO8L TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""key_material"": null, ""name"": ""monx-prod-ifrastrc-key"", ""profile"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""wait"": false, ""wait_timeout"": ""300""}, ""module_name"": ""ec2_key""}, ""key"": {""fingerprint"": ""59:a6:0d:7d:b1:46:0d:20:e5:37:d3:16:b6:a6:17:b7:6a:03:af:6e"", ""name"": ""monx-prod-ifrastrc-key""}} TASK [Save private key] ******************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:58 skipping: [localhost] => {""changed"": false, ""skip_reason"": ""Conditional check failed"", ""skipped"": true} TASK [Create Redis] ************************************************************ task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:65 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911 `"" && echo 
ansible-tmp-1478219789.89-32924999230911=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpcuaZrP TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2 <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2 && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 1, ""count_tag"": ""{'Name': 'redis', 'env': 'prod-ifrastrc'}"", ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": 1, ""group"": null, ""group_id"": [""sg-2e73a848""], ""id"": null, ""image"": ""ami-1c4a046f"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Name"": ""redis"", ""env"": ""prod-ifrastrc""}, ""instance_type"": ""t2.micro"", ""kernel"": null, ""key_name"": ""monx-prod-ifrastrc-key"", ""monitoring"": false, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": null, ""vpc_subnet_id"": null, ""wait"": true, ""wait_timeout"": ""300"", ""zone"": null}, ""module_name"": ""ec2""}, ""msg"": ""Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/tarak/Dropbox/IaaS/provision/provision.retry PLAY RECAP ********************************************************************* localhost : ok=5 changed=0 unreachable=0 failed=1 ```",True,"Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_group / ec2 ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ElementaryOS Loki macOS Sierra 10.12.1 ##### SUMMARY While doing some tests suddenly i started receiving this error while creating machines: fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty""} I thought i messed something up so i reverted to my yesterday version of the code but the issue was still there. 
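A minimal sketch of one variation that could help isolate the empty-groupId failure shown above, assuming the security group name created by the earlier ec2_group task; referencing the group by name instead of the registered id is an experiment to narrow the problem down, not a confirmed fix:

```yaml
- name: Create Redis (sketch - reference the security group by name)
  ec2:
    key_name: "{{ project_name }}-{{ env }}-key"
    region: "{{ aws_region }}"
    group: "{{ project_name }}_security_group"   # name lookup instead of group_id
    instance_type: "{{ instance_type }}"
    image: "{{ ami }}"
    wait: yes
    exact_count: 1
    count_tag:
      Name: "redis"
      env: "{{ env }}"
    instance_tags:
      Name: "redis"
      env: "{{ env }}"
  register: ec2_redis
```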
##### STEPS TO REPRODUCE ``` --- - name: Engine Start hosts: localhost gather_facts: true connection: local tasks: - name: Create VPC ec2_vpc: state: present cidr_block: 172.29.0.0/16 resource_tags: { ""env"": ""{{env}}"" , ""Name"": ""{{project_name}}"" } region: ""{{ aws_region }}"" register: ec2_env_vpc - name: Create app private dns zone route53_zone: zone: ""{{domain_zone}}"" state: present vpc_id: ""{{ec2_env_vpc.vpc_id}}"" comment: 'Internal Zone for app' - name: Create security group ec2_group: name: ""{{ project_name }}_security_group"" description: ""{{ project_name }} security group"" region: ""{{ aws_region }}"" vpc_id: ""{{ec2_env_vpc.vpc_id}}"" rules: - proto: tcp from_port: 22 to_port: 22 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 80 to_port: 80 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 443 to_port: 443 cidr_ip: 0.0.0.0/0 rules_egress: - proto: all cidr_ip: 0.0.0.0/0 register: basic_firewall - name: Create an EC2 key ec2_key: name: ""{{ project_name }}-{{ env }}-key"" region: ""{{ aws_region }}"" register: ec2_key - name: Save private key copy: content: ""{{ ec2_key.key.private_key }}"" dest: ""./aws-{{ env }}-private.pem"" mode: 0600 when: ec2_key.changed - name: Create Redis ec2: key_name: ""{{ project_name }}-{{ env }}-key"" region: ""{{ aws_region }}"" group_id: ""{{ basic_firewall.group_id }}"" instance_type: ""{{ instance_type }}"" image: ""{{ ami }}"" wait: yes instance_tags: Name: ""redis"" env: ""{{env}}"" exact_count: 1 count_tag: Name: ""redis"" env: ""{{env}}"" register: ec2_redis ``` ##### EXPECTED RESULTS create machines ##### ACTUAL RESULTS errors ``` PLAY [Engine Start] ************************************************************ TASK [setup] ******************************************************************* <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660 `"" && echo ansible-tmp-1478219783.91-199174880918660=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpEYWlHH TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/setup; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219783.91-199174880918660/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [Create VPC] ************************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:7 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835 `"" && echo ansible-tmp-1478219784.52-97557555770835=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpe3WMHj TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc && sleep 0' 
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/ec2_vpc; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219784.52-97557555770835/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""igw_id"": null, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""cidr_block"": ""172.29.0.0/16"", ""dns_hostnames"": true, ""dns_support"": true, ""ec2_url"": null, ""instance_tenancy"": ""default"", ""internet_gateway"": false, ""profile"": null, ""region"": ""eu-west-1"", ""resource_tags"": {""Name"": ""monx"", ""env"": ""prod-ifrastrc""}, ""route_tables"": null, ""security_token"": null, ""state"": ""present"", ""subnets"": null, ""validate_certs"": true, ""vpc_id"": null, ""wait"": false, ""wait_timeout"": ""300""}, ""module_name"": ""ec2_vpc""}, ""subnets"": [], ""vpc"": {""cidr_block"": ""172.29.0.0/16"", ""dhcp_options_id"": ""dopt-ab0907ce"", ""id"": ""vpc-259fea41"", ""region"": ""eu-west-1"", ""state"": ""available""}, ""vpc_id"": ""vpc-259fea41""} TASK [Create app private dns zone] ********************************************* task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:15 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408 `"" && echo ansible-tmp-1478219786.49-20807177095408=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpHpqi6a TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/route53_zone; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219786.49-20807177095408/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""comment"": ""Internal Zone for app"", ""ec2_url"": null, ""profile"": null, ""region"": null, ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""vpc_id"": ""vpc-259fea41"", ""vpc_region"": null, ""zone"": ""twui.gonova.al""}, ""module_name"": ""route53_zone""}, ""set"": {""comment"": ""Internal Zone for app"", ""name"": ""twui.gonova.al."", ""private_zone"": false, ""vpc_id"": ""vpc-259fea41"", ""vpc_region"": null, ""zone_id"": ""ZZWP7CFJL8WAC""}} TASK [Create security group] *************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:22 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398 `"" && echo ansible-tmp-1478219787.86-187686993911398=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpaHUqBj TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group <127.0.0.1> EXEC /bin/sh -c 'chmod u+x 
/Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/ec2_group; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219787.86-187686993911398/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""group_id"": ""sg-2e73a848"", ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""description"": ""monx security group"", ""ec2_url"": null, ""name"": ""monx_security_group"", ""profile"": null, ""purge_rules"": true, ""purge_rules_egress"": true, ""region"": ""eu-west-1"", ""rules"": [{""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 22, ""proto"": ""tcp"", ""to_port"": 22}, {""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 80, ""proto"": ""tcp"", ""to_port"": 80}, {""cidr_ip"": ""0.0.0.0/0"", ""from_port"": 443, ""proto"": ""tcp"", ""to_port"": 443}], ""rules_egress"": [{""cidr_ip"": ""0.0.0.0/0"", ""from_port"": null, ""proto"": -1, ""to_port"": null}], ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""vpc_id"": ""vpc-259fea41""}, ""module_name"": ""ec2_group""}} TASK [Create an EC2 key] ******************************************************* task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:46 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217 `"" && echo ansible-tmp-1478219789.04-264318626940217=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpL0WO8L TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/ec2_key; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219789.04-264318626940217/"" > /dev/null 2>&1 && sleep 0' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""key_material"": null, ""name"": ""monx-prod-ifrastrc-key"", ""profile"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""state"": ""present"", ""validate_certs"": true, ""wait"": false, ""wait_timeout"": ""300""}, ""module_name"": ""ec2_key""}, ""key"": {""fingerprint"": ""59:a6:0d:7d:b1:46:0d:20:e5:37:d3:16:b6:a6:17:b7:6a:03:af:6e"", ""name"": ""monx-prod-ifrastrc-key""}} TASK [Save private key] ******************************************************** task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:58 skipping: [localhost] => {""changed"": false, ""skip_reason"": ""Conditional check failed"", ""skipped"": true} TASK [Create Redis] ************************************************************ task path: /Users/tarak/Dropbox/IaaS/provision/provision.yml:65 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tarak <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911 `"" && echo 
ansible-tmp-1478219789.89-32924999230911=""` echo $HOME/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/md/p4_n32hs4y5_d13b5s5w96dw0000gn/T/tmpcuaZrP TO /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2 <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2 && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/ec2; rm -rf ""/Users/tarak/.ansible/tmp/ansible-tmp-1478219789.89-32924999230911/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 1, ""count_tag"": ""{'Name': 'redis', 'env': 'prod-ifrastrc'}"", ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": 1, ""group"": null, ""group_id"": [""sg-2e73a848""], ""id"": null, ""image"": ""ami-1c4a046f"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Name"": ""redis"", ""env"": ""prod-ifrastrc""}, ""instance_type"": ""t2.micro"", ""kernel"": null, ""key_name"": ""monx-prod-ifrastrc-key"", ""monitoring"": false, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""eu-west-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": null, ""vpc_subnet_id"": null, ""wait"": true, ""wait_timeout"": ""300"", ""zone"": null}, ""module_name"": ""ec2""}, ""msg"": ""Instance creation failed => InvalidParameterValue: Value () for parameter groupId is invalid. 
The value cannot be empty""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/tarak/Dropbox/IaaS/provision/provision.retry PLAY RECAP ********************************************************************* localhost : ok=5 changed=0 unreachable=0 failed=1 ```",1,instance creation failed invalidparametervalue value for parameter groupid is invalid the value cannot be empty issue type bug report component name group ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration just added py and ini with cache os environment elementaryos loki macos sierra summary while doing some tests suddenly i started receiving this error while creating machines fatal failed changed false failed true msg instance creation failed invalidparametervalue value for parameter groupid is invalid the value cannot be empty i thought i messed something up so i reverted to my yesterday version of the code but the issue was still there steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name engine start hosts localhost gather facts true connection local tasks name create vpc vpc state present cidr block resource tags env env name project name region aws region register env vpc name create app private dns zone zone zone domain zone state present vpc id env vpc vpc id comment internal zone for app name create security group group name project name security group description project name security group region aws region vpc id env vpc vpc id rules proto tcp from port to port cidr ip proto tcp from port to port cidr ip proto tcp from port to port cidr ip rules egress proto all cidr ip register basic firewall name create an key key name project name env key region aws region register key name save private key copy content key key private key dest aws env private pem mode when key changed name create redis key name project name env key region aws region group id basic firewall group id instance type instance type image ami wait yes instance tags name redis env env exact count count tag name redis env env register redis expected results create machines actual results errors play task establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t tmpeywlhh to users tarak ansible tmp ansible tmp setup exec bin sh c chmod u x users tarak ansible tmp ansible tmp users tarak ansible tmp ansible tmp setup sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp setup rm rf users tarak ansible tmp ansible tmp dev null sleep ok task task path users tarak dropbox iaas provision provision yml establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t to users tarak ansible tmp ansible tmp vpc exec bin sh c chmod u x users tarak ansible tmp ansible tmp users tarak ansible tmp ansible tmp vpc sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp vpc rm rf users tarak ansible tmp ansible tmp dev null sleep ok changed false igw id null invocation module args aws access key null aws secret key null cidr block dns hostnames true dns support true url null instance tenancy 
default internet gateway false profile null region eu west resource tags name monx env prod ifrastrc route tables null security token null state present subnets null validate certs true vpc id null wait false wait timeout module name vpc subnets vpc cidr block dhcp options id dopt id vpc region eu west state available vpc id vpc task task path users tarak dropbox iaas provision provision yml establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t to users tarak ansible tmp ansible tmp zone exec bin sh c chmod u x users tarak ansible tmp ansible tmp users tarak ansible tmp ansible tmp zone sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp zone rm rf users tarak ansible tmp ansible tmp dev null sleep ok changed false invocation module args aws access key null aws secret key null comment internal zone for app url null profile null region null security token null state present validate certs true vpc id vpc vpc region null zone twui gonova al module name zone set comment internal zone for app name twui gonova al private zone false vpc id vpc vpc region null zone id task task path users tarak dropbox iaas provision provision yml establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t tmpahuqbj to users tarak ansible tmp ansible tmp group exec bin sh c chmod u x users tarak ansible tmp ansible tmp users tarak ansible tmp ansible tmp group sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp group rm rf users tarak ansible tmp ansible tmp dev null sleep ok changed false group id sg invocation module args aws access key null aws secret key null description monx security group url null name monx security group profile null purge rules true purge rules egress true region eu west rules rules egress security token null state present validate certs true vpc id vpc module name group task task path users tarak dropbox iaas provision provision yml establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t to users tarak ansible tmp ansible tmp key exec bin sh c chmod u x users tarak ansible tmp ansible tmp users tarak ansible tmp ansible tmp key sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp key rm rf users tarak ansible tmp ansible tmp dev null sleep ok changed false invocation module args aws access key null aws secret key null url null key material null name monx prod ifrastrc key profile null region eu west security token null state present validate certs true wait false wait timeout module name key key fingerprint af name monx prod ifrastrc key task task path users tarak dropbox iaas provision provision yml skipping changed false skip reason conditional check failed skipped true task task path users tarak dropbox iaas provision provision yml establish local connection for user tarak exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders md t tmpcuazrp to users tarak ansible tmp ansible tmp exec bin sh c chmod u x users tarak ansible 
tmp ansible tmp users tarak ansible tmp ansible tmp sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users tarak ansible tmp ansible tmp rm rf users tarak ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args assign public ip false aws access key null aws secret key null count count tag name redis env prod ifrastrc ebs optimized false url null exact count group null group id id null image ami instance ids null instance profile name null instance tags name redis env prod ifrastrc instance type micro kernel null key name monx prod ifrastrc key monitoring false network interfaces null placement group null private ip null profile null ramdisk null region eu west security token null source dest check true spot launch group null spot price null spot type one time spot wait timeout state present tenancy default termination protection false user data null validate certs true volumes null vpc subnet id null wait true wait timeout zone null module name msg instance creation failed invalidparametervalue value for parameter groupid is invalid the value cannot be empty no more hosts left to retry use limit users tarak dropbox iaas provision provision retry play recap localhost ok changed unreachable failed ,1 758,4351996319.0,IssuesEvent,2016-08-01 03:35:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bug: Check in unarchive module whether 'dest' is writable should be removed.,bug_report feature_idea waiting_on_maintainer,"##### Issue Type: Bug in ```unarchive``` module. ##### Ansible Version: ```ansible 1.9.0.1``` ##### Ansible Configuration: n/a ##### Environment: n/a ##### Summary: The ```unarchive``` module checks whether the ```dest``` directory is writable before unpacking the archive: ``` if not os.access(dest, os.W_OK): module.fail_json(msg=""Destination '%s' not writable"" % dest) ``` While this is certainly well intended it prevents archives from being unpacked that don't actually create files in ```dest``` but only in (writable!) sub-directories. For instance an archive like this ``` # tar tvf myarchive.tar /tmp/file1 /tmp/file2 ``` will trigger this error if the ```unarchive``` module detects that the ```/``` directory is not writable even if ```/tmp``` was. ##### Steps To Reproduce: n/a ##### Expected Results: Being able to use ```unarchive``` with archives similar to the one as described in the Summary. I believe the check whether ```dest``` is writable should be removed. ##### Actual Results: Currently the ```unarchive``` module reports a ``` msg: Destination '/' not writable ``` message. ",True,"Bug: Check in unarchive module whether 'dest' is writable should be removed. - ##### Issue Type: Bug in ```unarchive``` module. ##### Ansible Version: ```ansible 1.9.0.1``` ##### Ansible Configuration: n/a ##### Environment: n/a ##### Summary: The ```unarchive``` module checks whether the ```dest``` directory is writable before unpacking the archive: ``` if not os.access(dest, os.W_OK): module.fail_json(msg=""Destination '%s' not writable"" % dest) ``` While this is certainly well intended it prevents archives from being unpacked that don't actually create files in ```dest``` but only in (writable!) sub-directories. For instance an archive like this ``` # tar tvf myarchive.tar /tmp/file1 /tmp/file2 ``` will trigger this error if the ```unarchive``` module detects that the ```/``` directory is not writable even if ```/tmp``` was. 
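Since the steps-to-reproduce field below is marked n/a, here is a minimal sketch of a task that would hit the writability check quoted above; it assumes an archive named myarchive.tar whose members all live under the writable /tmp, exactly as in the summary, so the 1.9 module refuses to extract even though nothing would be written outside /tmp:

```yaml
- name: Reproduce the over-eager writability check (sketch)
  unarchive:
    src: myarchive.tar
    dest: /    # fails with "Destination '/' not writable" on 1.9
```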
##### Steps To Reproduce: n/a ##### Expected Results: Being able to use ```unarchive``` with archives similar to the one as described in the Summary. I believe the check whether ```dest``` is writable should be removed. ##### Actual Results: Currently the ```unarchive``` module reports a ``` msg: Destination '/' not writable ``` message. ",1,bug check in unarchive module whether dest is writable should be removed issue type bug in unarchive module ansible version ansible ansible configuration n a environment n a summary the unarchive module checks whether the dest directory is writable before unpacking the archive if not os access dest os w ok module fail json msg destination s not writable dest while this is certainly well intended it prevents archives from being unpacked that don t actually create files in dest but only in writable sub directories for instance an archive like this tar tvf myarchive tar tmp tmp will trigger this error if the unarchive module detects that the directory is not writable even if tmp was steps to reproduce n a expected results being able to use unarchive with archives similar to the one as described in the summary i believe the check whether dest is writable should be removed actual results currently the unarchive module reports a msg destination not writable message ,1 1849,6577390574.0,IssuesEvent,2017-09-12 00:34:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest: vm_nic should include manual MAC change feature,affects_2.3 bug_report cloud feature_idea vmware waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY Currently during VM reconfiguring following options are supported for vm_nic. ``` vm_nic: nic1: type: vmxnet3 network: VM Network network_type: standard ``` I think this should be extended with feature to define MAC address manually, adding address_type: manual address: ""00:0c:29:ac:70:96"" Final look might be like: ``` vm_nic: nic1: type: vmxnet3 network: VM Network network_type: standard address_type: manual address: ""00:0c:29:ac:70:96"" ``` This functionality looks like might be supported by pysphere, but currently not implemented in Ansible. This feature might be useful when Ansible is used to rebuild same VMs multiple times and there are static DHCP leases configured for exact MAC addresses. ",True,"vsphere_guest: vm_nic should include manual MAC change feature - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY Currently during VM reconfiguring following options are supported for vm_nic. ``` vm_nic: nic1: type: vmxnet3 network: VM Network network_type: standard ``` I think this should be extended with feature to define MAC address manually, adding address_type: manual address: ""00:0c:29:ac:70:96"" Final look might be like: ``` vm_nic: nic1: type: vmxnet3 network: VM Network network_type: standard address_type: manual address: ""00:0c:29:ac:70:96"" ``` This functionality looks like might be supported by pysphere, but currently not implemented in Ansible. This feature might be useful when Ansible is used to rebuild same VMs multiple times and there are static DHCP leases configured for exact MAC addresses. 
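The snippet below only illustrates how the proposed address_type/address keys would sit inside a full vsphere_guest reconfigure task; it is hypothetical, since the issue states the feature is not implemented, and the vCenter connection values are placeholders:

```yaml
- name: Reconfigure VM NIC with a manually assigned MAC (hypothetical syntax)
  vsphere_guest:
    vcenter_hostname: vcenter.example.com    # placeholder
    username: admin                          # placeholder
    password: secret                         # placeholder
    guest: my-vm                             # placeholder
    state: reconfigured
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
        address_type: manual            # proposed, not yet supported
        address: "00:0c:29:ac:70:96"    # proposed, not yet supported
```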
",1,vsphere guest vm nic should include manual mac change feature issue type bug report component name vsphere guest ansible version n a summary currently during vm reconfiguring following options are supported for vm nic vm nic type network vm network network type standard i think this should be extended with feature to define mac address manually adding address type manual address ac final look might be like vm nic type network vm network network type standard address type manual address ac this functionality looks like might be supported by pysphere but currently not implemented in ansible this feature might be useful when ansible is used to rebuild same vms multiple times and there are static dhcp leases configured for exact mac addresses ,1 751,4351333863.0,IssuesEvent,2016-07-31 20:02:23,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Docker hostname doesn't work with net: host,bug_report cloud docker waiting_on_maintainer,"## Issue Type Bug Report ## Component Name _docker module ## Ansible Version ``` ansible --version ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ## Ansible Configuration No configuration changes ## Environment I'm running ansible inside this docker container: https://hub.docker.com/r/williamyeh/ansible/ My `Dockerfile`: ``` FROM williamyeh/ansible:debian8 RUN apt-get update && \ apt-get install -y ssh \ rsync \ python-httplib2 # Install Docker adn Docker Compose Galaxy modules # TODO: Download roles with specific version # Github Issue: https://github.com/ansible/ansible/issues/13886 ENV ansible_docker_version=1.6.0 RUN ansible-galaxy install franklinkim.docker ENV ansible_docker_compose_version=1.2.1 RUN ansible-galaxy install franklinkim.docker-compose ENV ansible_node_version=2.0.2 RUN ansible-galaxy install geerlingguy.nodejs ENV ansible_ansistrano_deploy_version=1.3.0 ansible_ansistrano_rollback_version=1.2.0 RUN ansible-galaxy install carlosbuenosvinos.ansistrano-deploy carlosbuenosvinos.ansistrano-rollback ``` ## Summary When I try this: ``` - name: Start proxy container docker: name: proxy hostname: proxy image: user/proxy-nginx state: started restart_policy: always net: host volumes: - ""/home/user/nginx/config:/etc/nginx:ro"" ``` I get the following error: ``` FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: Conflicting options: -h and the network mode (--net)""} ``` When I remove the `hostname` property the issue is gone. ## Steps To Reproduce Build container: ``` docker build --no-cache=true --tag='you/ansible-provisioning:0.0.3' . ``` Start container (link key and playbook directory): ``` docker run --rm \ -it --name=ansible \ -v $project_dirirectory:/ansible:ro \ -v $ssh_key_path:/root/.ssh/id_rsa:ro \ -v $ssh_key_path.pub:/root/.ssh/id_rsa.pub:ro \ --workdir=/ansible you/ansible-provisioning:0.0.3 bash ``` Inside the container run: ``` ansible-playbook server-setup.yml -i hosts/hosts ``` ## Expected Results Run docker container with host name and configured to use the host network. ## Actual Results Error message: ``` FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: Conflicting options: -h and the network mode (--net)""} ``` Container is not started.",True,"Docker hostname doesn't work with net: host - ## Issue Type Bug Report ## Component Name _docker module ## Ansible Version ``` ansible --version ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ## Ansible Configuration No configuration changes ## Environment I'm running ansible inside this docker container: https://hub.docker.com/r/williamyeh/ansible/ My `Dockerfile`: ``` FROM williamyeh/ansible:debian8 RUN apt-get update && \ apt-get install -y ssh \ rsync \ python-httplib2 # Install Docker adn Docker Compose Galaxy modules # TODO: Download roles with specific version # Github Issue: https://github.com/ansible/ansible/issues/13886 ENV ansible_docker_version=1.6.0 RUN ansible-galaxy install franklinkim.docker ENV ansible_docker_compose_version=1.2.1 RUN ansible-galaxy install franklinkim.docker-compose ENV ansible_node_version=2.0.2 RUN ansible-galaxy install geerlingguy.nodejs ENV ansible_ansistrano_deploy_version=1.3.0 ansible_ansistrano_rollback_version=1.2.0 RUN ansible-galaxy install carlosbuenosvinos.ansistrano-deploy carlosbuenosvinos.ansistrano-rollback ``` ## Summary When I try this: ``` - name: Start proxy container docker: name: proxy hostname: proxy image: user/proxy-nginx state: started restart_policy: always net: host volumes: - ""/home/user/nginx/config:/etc/nginx:ro"" ``` I get the following error: ``` FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: Conflicting options: -h and the network mode (--net)""} ``` When I remove the `hostname` property the issue is gone. ## Steps To Reproduce Build container: ``` docker build --no-cache=true --tag='you/ansible-provisioning:0.0.3' . ``` Start container (link key and playbook directory): ``` docker run --rm \ -it --name=ansible \ -v $project_dirirectory:/ansible:ro \ -v $ssh_key_path:/root/.ssh/id_rsa:ro \ -v $ssh_key_path.pub:/root/.ssh/id_rsa.pub:ro \ --workdir=/ansible you/ansible-provisioning:0.0.3 bash ``` Inside the container run: ``` ansible-playbook server-setup.yml -i hosts/hosts ``` ## Expected Results Run docker container with host name and configured to use the host network. ## Actual Results Error message: ``` FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: Conflicting options: -h and the network mode (--net)""} ``` Container is not started.",1,docker hostname doesn t work with net host issue type bug report component name docker module ansible version ansible version ansible config file configured module search path default w o overrides ansible configuration no configuration changes environment i m running ansible inside this docker container my dockerfile from williamyeh ansible run apt get update apt get install y ssh rsync python install docker adn docker compose galaxy modules todo download roles with specific version github issue env ansible docker version run ansible galaxy install franklinkim docker env ansible docker compose version run ansible galaxy install franklinkim docker compose env ansible node version run ansible galaxy install geerlingguy nodejs env ansible ansistrano deploy version ansible ansistrano rollback version run ansible galaxy install carlosbuenosvinos ansistrano deploy carlosbuenosvinos ansistrano rollback summary when i try this name start proxy container docker name proxy hostname proxy image user proxy nginx state started restart policy always net host volumes home user nginx config etc nginx ro i get the following error failed changed false failed true msg docker api error conflicting options h and the network mode net when i remove the hostname property the issue is gone steps to reproduce build container docker build no cache true tag you ansible provisioning start container link key and playbook directory docker run rm it name ansible v project dirirectory ansible ro v ssh key path root ssh id rsa ro v ssh key path pub root ssh id rsa pub ro workdir ansible you ansible provisioning bash inside the container run ansible playbook server setup yml i hosts hosts expected results run docker container with host name and configured to use the host network actual results error message failed changed false failed true msg docker api error conflicting options h and the network mode net container is not started ,1 1349,5790605844.0,IssuesEvent,2017-05-02 01:20:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible 2.0.2.0 and later break Webmin port test,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME uri module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = /home/ansible/library/:/usr/share/ansible/library/ ``` ##### CONFIGURATION ``` [defaults] # some basic default values... hostfile = /etc/ansible/hosts library = ~/library/:/usr/share/ansible/library/ remote_tmp = ~/.ansible/tmp pattern = * forks = 5 poll_interval = 15 remote_user = root sudo_user = root #ask_sudo_pass = True #ask_pass = True transport = smart remote_port = 22 module_lang = C ``` ##### OS / ENVIRONMENT Local: Ansible is running on a TurnKey GNU/Linux Ansible appliance version 14.1 (based on Debian Jessie) Remote: LXC is running on a TurnKey GNU/Linux LXC appliance version 14.1. Chifflier's ansible-lxc-ssh plugin is used to connect to containers https://github.com/chifflier/ansible-lxc-ssh ##### SUMMARY Ansible versions 2.0.2.0 and later break a simple task used test a Webmin port. The task has worked from versions 1.9 through 2.0.1.0. 
##### STEPS TO REPRODUCE The failing task is part of a playbook designed to create an LXC container running a specified TurnKey appliance, and then run a series of tests that verify the appliance is functional. The failing task uses the uri module to connect to the container's Webmin port (12321). It is considered successful if one of the 'good' response codes is returned. Ansible task that fails ``` vars: ... good_codes: [200, 201, 202, 300, 301, 302, 303, 304, 307, 308, 400, 401, 403] tasks: ... - name: Test appliance webmin tags: test uri: url: ""https://{{container}}:12321/"" validate_certs: no status_code: ""{{good_codes}}"" register: webmin ignore_errors: yes until: webmin|success delay: 10 retries: 1 when: app not in ['tkldev'] ``` ##### EXPECTED RESULTS Results from version 2.0.1.0 (passed) ``` TASK [Test appliance webmin] *************************************************** task path: /home/ansible/playbooks/webmin-test.yml:139 ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'mkdir -p ""` echo ~/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122 `"" && echo ""` echo ~/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122 `""'""'""'' PUT /tmp/tmpJw5Mg0 TO /root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/uri SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r '[lxc]' ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/uri; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/"" > /dev/null 2>&1'""'""'' ok: [lxc] => {""cache_control"": ""no-store, no-cache, must-revalidate, post-check=0, pre-check=0"", ""changed"": false, ""connection"": ""close"", ""content_location"": ""https://drupal8-natbr0-container:12321/"", ""content_security_policy"": ""script-src 'self' 'unsafe-inline' 'unsafe-eval'; frame-src 'self'; child-src 'self'"", ""content_type"": ""text/html; Charset=UTF-8"", ""date"": ""Mon, 8 Aug 2016 03:06:44 GMT"", ""expires"": ""Thu, 1 Jan 1970 00:00:00 GMT"", ""invocation"": {""module_args"": {""backup"": null, ""body"": null, ""body_format"": ""raw"", ""content"": null, ""creates"": null, ""delimiter"": null, ""dest"": null, ""directory_mode"": null, ""follow"": false, ""follow_redirects"": ""safe"", ""force"": null, ""force_basic_auth"": false, ""group"": null, ""method"": ""GET"", ""mode"": null, ""owner"": null, ""password"": null, ""regexp"": null, ""remote_src"": null, ""removes"": null, ""return_content"": false, ""selevel"": null, ""serole"": 
null, ""setype"": null, ""seuser"": null, ""src"": null, ""status_code"": [200, 201, 202, 300, 301, 302, 303, 304, 307, 308, 400, 401, 403], ""timeout"": 30, ""url"": ""https://drupal8-natbr0-container:12321/"", ""user"": null, ""validate_certs"": false}, ""module_name"": ""uri""}, ""pragma"": ""no-cache"", ""redirected"": false, ""server"": ""MiniServ/1.780"", ""set_cookie"": ""testing=1; path=/; secure; httpOnly; httpOnly"", ""status"": 200, ""x_frame_options"": ""SAMEORIGIN""} ``` ##### ACTUAL RESULTS Results from version 2.0.2.0 ``` TASK [Test appliance webmin] *************************************************** task path: /home/ansible/playbooks/webmin-test.yml:139 ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r lxc '/bin/sh -c '""'""'mkdir -p ""` echo ~/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525 `"" && echo ""` echo ~/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525 `""'""'""'' PUT /tmp/tmppg6gtg TO /root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r '[lxc]' ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/"" > /dev/null 2>&1'""'""'' fatal: [lxc]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""uri""}, ""module_stderr"": ""OpenSSH_6.7p1 Debian-5+deb8u2, OpenSSL 1.0.1k 8 Jan 2015\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 15131\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to lxc closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 3310, in \r\n main()\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 407, in main\r\n dict_headers, socket_timeout)\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 334, in uri\r\n content = resp.read()\r\n File \""/usr/lib/python2.7/socket.py\"", line 351, in read\r\n data = self._sock.recv(rbufsize)\r\n File \""/usr/lib/python2.7/httplib.py\"", line 573, in read\r\n s = self.fp.read(amt)\r\n File \""/usr/lib/python2.7/socket.py\"", line 380, in read\r\n data = self._sock.recv(left)\r\n File \""/usr/lib/python2.7/ssl.py\"", line 714, in recv\r\n return self.read(buflen)\r\n File \""/usr/lib/python2.7/ssl.py\"", line 608, in read\r\n v = self._sslobj.read(len or 1024)\r\nssl.SSLError: ('The read operation timed out',)\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ...ignoring ``` ##### COMMENTS I think the problem may be related to issue #3437 ""Discrepancy in uri module behavior between 2.0.1.0 and 2.0.2.0"", although I tried the fix suggested for #3437 and it did not help. The problem seems unique to Webmin, as several tests of other ports continue to function. ",True,"Ansible 2.0.2.0 and later break Webmin port test - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME uri module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = /home/ansible/library/:/usr/share/ansible/library/ ``` ##### CONFIGURATION ``` [defaults] # some basic default values... hostfile = /etc/ansible/hosts library = ~/library/:/usr/share/ansible/library/ remote_tmp = ~/.ansible/tmp pattern = * forks = 5 poll_interval = 15 remote_user = root sudo_user = root #ask_sudo_pass = True #ask_pass = True transport = smart remote_port = 22 module_lang = C ``` ##### OS / ENVIRONMENT Local: Ansible is running on a TurnKey GNU/Linux Ansible appliance version 14.1 (based on Debian Jessie) Remote: LXC is running on a TurnKey GNU/Linux LXC appliance version 14.1. Chifflier's ansible-lxc-ssh plugin is used to connect to containers https://github.com/chifflier/ansible-lxc-ssh ##### SUMMARY Ansible versions 2.0.2.0 and later break a simple task used test a Webmin port. The task has worked from versions 1.9 through 2.0.1.0. ##### STEPS TO REPRODUCE The failing task is part of a playbook designed to create an LXC container running a specified TurnKey appliance, and then run a series of tests that verify the appliance is functional. 
The failing task uses the uri module to connect to the container's Webmin port (12321). It is considered successful if one of the 'good' response codes is returned. Ansible task that fails ``` vars: ... good_codes: [200, 201, 202, 300, 301, 302, 303, 304, 307, 308, 400, 401, 403] tasks: ... - name: Test appliance webmin tags: test uri: url: ""https://{{container}}:12321/"" validate_certs: no status_code: ""{{good_codes}}"" register: webmin ignore_errors: yes until: webmin|success delay: 10 retries: 1 when: app not in ['tkldev'] ``` ##### EXPECTED RESULTS Results from version 2.0.1.0 (passed) ``` TASK [Test appliance webmin] *************************************************** task path: /home/ansible/playbooks/webmin-test.yml:139 ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'mkdir -p ""` echo ~/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122 `"" && echo ""` echo ~/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122 `""'""'""'' PUT /tmp/tmpJw5Mg0 TO /root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/uri SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r '[lxc]' ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/uri; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470625603.84-183242562950122/"" > /dev/null 2>&1'""'""'' ok: [lxc] => {""cache_control"": ""no-store, no-cache, must-revalidate, post-check=0, pre-check=0"", ""changed"": false, ""connection"": ""close"", ""content_location"": ""https://drupal8-natbr0-container:12321/"", ""content_security_policy"": ""script-src 'self' 'unsafe-inline' 'unsafe-eval'; frame-src 'self'; child-src 'self'"", ""content_type"": ""text/html; Charset=UTF-8"", ""date"": ""Mon, 8 Aug 2016 03:06:44 GMT"", ""expires"": ""Thu, 1 Jan 1970 00:00:00 GMT"", ""invocation"": {""module_args"": {""backup"": null, ""body"": null, ""body_format"": ""raw"", ""content"": null, ""creates"": null, ""delimiter"": null, ""dest"": null, ""directory_mode"": null, ""follow"": false, ""follow_redirects"": ""safe"", ""force"": null, ""force_basic_auth"": false, ""group"": null, ""method"": ""GET"", ""mode"": null, ""owner"": null, ""password"": null, ""regexp"": null, ""remote_src"": null, ""removes"": null, ""return_content"": false, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""status_code"": [200, 201, 202, 300, 301, 302, 303, 304, 307, 308, 400, 401, 403], ""timeout"": 30, ""url"": ""https://drupal8-natbr0-container:12321/"", 
""user"": null, ""validate_certs"": false}, ""module_name"": ""uri""}, ""pragma"": ""no-cache"", ""redirected"": false, ""server"": ""MiniServ/1.780"", ""set_cookie"": ""testing=1; path=/; secure; httpOnly; httpOnly"", ""status"": 200, ""x_frame_options"": ""SAMEORIGIN""} ``` ##### ACTUAL RESULTS Results from version 2.0.2.0 ``` TASK [Test appliance webmin] *************************************************** task path: /home/ansible/playbooks/webmin-test.yml:139 ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r lxc '/bin/sh -c '""'""'mkdir -p ""` echo ~/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525 `"" && echo ""` echo ~/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525 `""'""'""'' PUT /tmp/tmppg6gtg TO /root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r '[lxc]' ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r -tt lxc '/bin/sh -c '""'""'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri; rm -rf ""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/"" > /dev/null 2>&1'""'""'' fatal: [lxc]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""uri""}, ""module_stderr"": ""OpenSSH_6.7p1 Debian-5+deb8u2, OpenSSL 1.0.1k 8 Jan 2015\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 15131\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to lxc closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 3310, in \r\n main()\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 407, in main\r\n dict_headers, socket_timeout)\r\n File \""/root/.ansible/tmp/ansible-tmp-1470679357.66-240132650115525/uri\"", line 334, in uri\r\n content = resp.read()\r\n File \""/usr/lib/python2.7/socket.py\"", line 351, in read\r\n data = self._sock.recv(rbufsize)\r\n File \""/usr/lib/python2.7/httplib.py\"", line 573, in read\r\n s = self.fp.read(amt)\r\n File \""/usr/lib/python2.7/socket.py\"", line 380, in read\r\n data = self._sock.recv(left)\r\n File \""/usr/lib/python2.7/ssl.py\"", line 714, in recv\r\n return self.read(buflen)\r\n File \""/usr/lib/python2.7/ssl.py\"", line 608, in read\r\n v = self._sslobj.read(len or 1024)\r\nssl.SSLError: ('The read operation timed out',)\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ...ignoring ``` ##### COMMENTS I think the problem may be related to issue #3437 ""Discrepancy in uri module behavior between 2.0.1.0 and 2.0.2.0"", although I tried the fix suggested for #3437 and it did not help. The problem seems unique to Webmin, as several tests of other ports continue to function. 
",1,ansible and later break webmin port test issue type bug report component name uri module ansible version ansible config file etc ansible ansible cfg configured module search path home ansible library usr share ansible library configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables some basic default values hostfile etc ansible hosts library library usr share ansible library remote tmp ansible tmp pattern forks poll interval remote user root sudo user root ask sudo pass true ask pass true transport smart remote port module lang c os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific local ansible is running on a turnkey gnu linux ansible appliance version based on debian jessie remote lxc is running on a turnkey gnu linux lxc appliance version chifflier s ansible lxc ssh plugin is used to connect to containers summary ansible versions and later break a simple task used test a webmin port the task has worked from versions through steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the failing task is part of a playbook designed to create an lxc container running a specified turnkey appliance and then run a series of tests that verify the appliance is functional the failing task uses the uri module to connect to the container s webmin port it is considered successful if one of the good response codes is returned ansible task that fails vars good codes tasks name test appliance webmin tags test uri url validate certs no status code good codes register webmin ignore errors yes until webmin success delay retries when app not in expected results results from version passed task task path home ansible playbooks webmin test yml establish ssh connection for user root ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r tt lxc bin sh c mkdir p echo ansible tmp ansible tmp echo echo ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp uri ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r tt lxc bin sh c lang c lc all c lc messages c usr bin python root ansible tmp ansible tmp uri rm rf root ansible tmp ansible tmp dev null ok cache control no store no cache must revalidate post check pre check changed false connection close content location content security policy script src self unsafe inline unsafe eval frame src self child src self content type text html charset utf date mon aug gmt expires thu jan gmt invocation module args backup null body null body format raw content null creates null delimiter null 
dest null directory mode null follow false follow redirects safe force null force basic auth false group null method get mode null owner null password null regexp null remote src null removes null return content false selevel null serole null setype null seuser null src null status code timeout url user null validate certs false module name uri pragma no cache redirected false server miniserv set cookie testing path secure httponly httponly status x frame options sameorigin actual results results from version task task path home ansible playbooks webmin test yml establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r lxc bin sh c mkdir p echo ansible tmp ansible tmp echo echo ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp uri ssh exec sftp b c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home ansible ansible cp ansible ssh h p r tt lxc bin sh c lang c lc all c lc messages c usr bin python root ansible tmp ansible tmp uri rm rf root ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module name uri module stderr openssh debian openssl jan r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to lxc closed r n module stdout traceback most recent call last r n file root ansible tmp ansible tmp uri line in r n main r n file root ansible tmp ansible tmp uri line in main r n dict headers socket timeout r n file root ansible tmp ansible tmp uri line in uri r n content resp read r n file usr lib socket py line in read r n data self sock recv rbufsize r n file usr lib httplib py line in read r n s self fp read amt r n file usr lib socket py line in read r n data self sock recv left r n file usr lib ssl py line in recv r n return self read buflen r n file usr lib ssl py line in read r n v self sslobj read len or r nssl sslerror the read operation timed out r n msg module failure parsed false ignoring comments i think the problem may be related to issue discrepancy in uri module behavior between and although i tried the fix suggested for and it did not help the problem seems unique to webmin as several tests of other ports continue to function ,1 832,4469601823.0,IssuesEvent,2016-08-25 
13:37:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest: should change backend to official API,cloud feature_idea vmware waiting_on_maintainer,"##### ISSUE TYPE - Feature Request ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION config file = configured module search path = Default w/o overrides ##### OS / ENVIRONMENT OSX 10.10.5 Centos 7.2.1511 ##### SUMMARY vsphere_guest depends on pysphere, which has been unmaintained since 2013. suggest we move the backend to official API from Vmware ie. https://github.com/vmware/pyvmomi ",True,"vsphere_guest: should change backend to official API - ##### ISSUE TYPE - Feature Request ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION config file = configured module search path = Default w/o overrides ##### OS / ENVIRONMENT OSX 10.10.5 Centos 7.2.1511 ##### SUMMARY vsphere_guest depends on pysphere, which has been unmaintained since 2013. suggest we move the backend to official API from Vmware ie. https://github.com/vmware/pyvmomi ",1,vsphere guest should change backend to official api issue type feature request component name vsphere guest ansible version ansible configuration config file configured module search path default w o overrides os environment osx centos summary vsphere guest depends on pysphere which has been unmaintained since suggest we move the backend to official api from vmware ie ,1 815,4441581899.0,IssuesEvent,2016-08-19 09:52:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Unarchive Error, No such file or directory",bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3c65c03a67) last updated 2016/08/15 16:01:24 (GMT +1000) lib/ansible/modules/core: (detached HEAD decb2ec9fa) last updated 2016/08/15 16:01:29 (GMT +1000) lib/ansible/modules/extras: (detached HEAD 61d5fe148c) last updated 2016/08/15 16:01:29 (GMT +1000) config file = /home/linus/Documents/ansible-playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu management node Centos 7 managed node ##### SUMMARY Gives No such file or directory error when added extra_opts: ""--strip-components=2"" ##### STEPS TO REPRODUCE ``` - name: unpack the artifacts unarchive: src: /usr/share/stuff.tar.gz dest: /usr/share/ extra_opts: ""--strip-components=2"" owner: nginx group: nginx copy: no ``` ##### EXPECTED RESULTS Changed ##### ACTUAL RESULTS The archive file structure is: dist/production/files I wan to strip the first two directories ``` fatal: [52.65.150.148]: FAILED! 
=> {""changed"": true, ""dest"": ""/usr/share/"", ""extract_results"": {""cmd"": ""/bin/gtar -C \""/usr/share/\"" -xz --strip-components=2 --owner=\""nginx\"" --group=\""nginx\"" -f \""/usr/share/stuff.tar.gz\"""", ""err"": """", ""out"": """", ""rc"": 0}, ""failed"": true, ""gid"": 992, ""group"": ""nginx"", ""handler"": ""TgzArchive"", ""mode"": ""02775"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/usr/share/stuff/dist/production/'"", ""owner"": ""bitbucket"", ""size"": 4096, ""src"": ""/usr/share/stuff.tar.gz"", ""state"": ""directory"", ""uid"": 1003} ``` ",True,"Unarchive Error, No such file or directory - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3c65c03a67) last updated 2016/08/15 16:01:24 (GMT +1000) lib/ansible/modules/core: (detached HEAD decb2ec9fa) last updated 2016/08/15 16:01:29 (GMT +1000) lib/ansible/modules/extras: (detached HEAD 61d5fe148c) last updated 2016/08/15 16:01:29 (GMT +1000) config file = /home/linus/Documents/ansible-playbooks/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Ubuntu management node Centos 7 managed node ##### SUMMARY Gives No such file or directory error when added extra_opts: ""--strip-components=2"" ##### STEPS TO REPRODUCE ``` - name: unpack the artifacts unarchive: src: /usr/share/stuff.tar.gz dest: /usr/share/ extra_opts: ""--strip-components=2"" owner: nginx group: nginx copy: no ``` ##### EXPECTED RESULTS Changed ##### ACTUAL RESULTS The archive file structure is: dist/production/files I wan to strip the first two directories ``` fatal: [52.65.150.148]: FAILED! => {""changed"": true, ""dest"": ""/usr/share/"", ""extract_results"": {""cmd"": ""/bin/gtar -C \""/usr/share/\"" -xz --strip-components=2 --owner=\""nginx\"" --group=\""nginx\"" -f \""/usr/share/stuff.tar.gz\"""", ""err"": """", ""out"": """", ""rc"": 0}, ""failed"": true, ""gid"": 992, ""group"": ""nginx"", ""handler"": ""TgzArchive"", ""mode"": ""02775"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/usr/share/stuff/dist/production/'"", ""owner"": ""bitbucket"", ""size"": 4096, ""src"": ""/usr/share/stuff.tar.gz"", ""state"": ""directory"", ""uid"": 1003} ``` ",1,unarchive error no such file or directory issue type bug report component name unarchive ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home linus documents ansible playbooks ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu management node centos managed node summary gives no such file or directory error when added extra opts strip components steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name unpack the artifacts unarchive src usr share stuff tar gz dest usr share extra opts strip components owner nginx group nginx copy no expected results changed actual results the archive file structure is dist production files i wan to strip the first two directories fatal failed changed true dest usr share extract 
results cmd bin gtar c usr share xz strip components owner nginx group nginx f usr share stuff tar gz err out rc failed true gid group nginx handler tgzarchive mode msg unexpected error when accessing exploded file no such file or directory usr share stuff dist production owner bitbucket size src usr share stuff tar gz state directory uid ,1 1792,6575891417.0,IssuesEvent,2017-09-11 17:43:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user module: Adding user with primary group keeps changed,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME user ##### ANSIBLE VERSION ``` ansible --version ansible 2.1.2.0 (stable-2.1 4c9ed1f4fb) last updated 2016/09/23 11:24:18 (GMT +200) lib/ansible/modules/core: (detached HEAD af67009d38) last updated 2016/09/23 11:27:16 (GMT +200) lib/ansible/modules/extras: (detached HEAD 1bde4310bc) last updated 2016/09/23 11:27:16 (GMT +200) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I'm creating a user with a primary group and some other groups. When using the stable branch, the task below keeps reporting changed = true. I couldn't find out what was actually changed, though. Running the same task with Ansible and the modules from devel works correctly. Maybe you can merge the changes from ansible-modules-core into stable. Unfortunately, I can't tell which commits. 
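For the unarchive report above (record 815): the extract itself returns rc 0, and the "Unexpected error when accessing exploded file" points at a post-extraction check against the archive's first listed path (dist/production/), which no longer exists once --strip-components=2 removes it. A possible interim workaround, sketched here under assumptions (the /usr/share/files path is inferred from the reported dist/production/files layout), is to call tar directly and fix ownership afterwards:

```yaml
# Hedged workaround sketch for the --strip-components case; this bypasses the unarchive module.
- name: Unpack the artifacts, stripping the two leading directories
  command: tar -xzf /usr/share/stuff.tar.gz --strip-components=2 -C /usr/share/
  args:
    creates: /usr/share/files    # assumed sentinel, based on the reported dist/production/files layout

- name: Restore the intended ownership afterwards
  file:
    path: /usr/share/files       # assumed to be a directory extracted from the archive
    state: directory
    owner: nginx
    group: nginx
    recurse: yes
```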
##### STEPS TO REPRODUCE ``` - name: add oracle user user: name=oracle group=oinstall groups=oinstall,dba password=foobar update_password=on_create ``` ##### EXPECTED RESULTS On second run, the change should be false and state ok. ``` TASK [oracle-12c-preparation : add oracle user] ******************************** ok: [mysecret] => {""append"": false, ""changed"": false, ""comment"": """", ""group"": 1001, ""groups"": ""oinstall,dba"", ""home"": ""/home/oracle"", ""move_home"": false, ""name"": ""oracle"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 1002} ``` ##### ACTUAL RESULTS State keeps changed. ``` TASK [oracle-12c-preparation : add oracle user] ******************************** changed: [mysecret] => {""append"": false, ""changed"": true, ""comment"": """", ""group"": 1001, ""groups"": ""oinstall,dba"", ""home"": ""/home/oracle"", ""move_home"": false, ""name"": ""oracle"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 1002} ``` ``` ``` ",1,user module adding user with primary group keeps changed issue type bug report component name user ansible version ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary i m creating a user with primary group and some other groups when using the stable branch the task from below keeps changed true i couldn t find out what is changed though running the same task with ansible and the modules from devel works correct maybe you can merge the changes in ansible modules core in stable unfortantly i can t tell which commits steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name add oracle user user name oracle group oinstall groups oinstall dba password foobar update password on create expected results on second run the change should be false and state ok task ok append false changed false comment group groups oinstall dba home home oracle move home false name oracle password not logging password shell bin bash state present uid actual results state keeps changed task changed append false changed true comment group groups oinstall dba home home oracle move home false name oracle password not logging password shell bin bash state present uid ,1 802,4422179295.0,IssuesEvent,2016-08-16 00:59:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Subsequent clone uses depth 1,bug_report P3 waiting_on_maintainer,"##### Issue Type: Bug report ##### Component Name: git module ##### Ansible Version: ansible 1.8.4 ##### Environment: Ubuntu 14.10. Installed using `sudo pip install ansible` ##### Summary: I have two subsequent `git clone` from two different repos on github. The first clone uses `depth 1`. The second does not. Looks like Ansible still uses depth 1 for the second clone so checkout to a branch doesn't work. 
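For the user-module report above (the "add oracle user" task that stays changed): one classic cause of a permanently changed user task is a password argument that never matches the stored hash. The report does not establish that this is the cause here, so the following is only a general idempotence sketch: a crypted hash generated with a fixed salt is identical on every run, so it cannot by itself trigger a change. The plain-text password and the salt are illustrative only.

```yaml
# Hedged sketch: supply a crypted hash (a fixed salt keeps the value stable between runs).
- name: add oracle user
  user:
    name: oracle
    group: oinstall
    groups: oinstall,dba
    password: "{{ 'foobar' | password_hash('sha512', 'Zf8Qx1') }}"   # illustrative password and salt
    update_password: on_create
```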
##### Steps To Reproduce: ```yaml - name: clone project git: repo: git@github.com:user/app_1.git version: develop dest: ""{{ app_1_dir }}"" depth: 1 accept_hostkey: yes - name: clone project git: repo: git@github.com:user/app_2.git version: develop dest: ""{{ app_2_dir }}"" accept_hostkey: yes ``` ##### Expected Results: Both clones should be successful. ##### Actual Results: ``` PLAY [webservers] ************************************************************* GATHERING FACTS *************************************************************** ok: [virtualbox] TASK: [fail msg=""These tasks were made for Ubuntu 14.04 LTS""] ***************** skipping: [virtualbox] TASK: [clone project] ********************************************************* changed: [virtualbox] TASK: [clone project] ********************************************************* failed: [virtualbox] => {""failed"": true} msg: Failed to checkout develop ``` When replaying -- the same result. When commenting `depth: 1` in the first clone -- the same result. When I remove both cloned repos and re-run -- it goes ok. ",True,"Subsequent clone uses depth 1 - ##### Issue Type: Bug report ##### Component Name: git module ##### Ansible Version: ansible 1.8.4 ##### Environment: Ubuntu 14.10. Installed using `sudo pip install ansible` ##### Summary: I have two subsequent `git clone` from two different repos on github. The first clone uses `depth 1`. The second does not. Looks like Ansible still uses depth 1 for the second clone so checkout to a branch doesn't work. ##### Steps To Reproduce: ```yaml - name: clone project git: repo: git@github.com:user/app_1.git version: develop dest: ""{{ app_1_dir }}"" depth: 1 accept_hostkey: yes - name: clone project git: repo: git@github.com:user/app_2.git version: develop dest: ""{{ app_2_dir }}"" accept_hostkey: yes ``` ##### Expected Results: Both clones should be successful. ##### Actual Results: ``` PLAY [webservers] ************************************************************* GATHERING FACTS *************************************************************** ok: [virtualbox] TASK: [fail msg=""These tasks were made for Ubuntu 14.04 LTS""] ***************** skipping: [virtualbox] TASK: [clone project] ********************************************************* changed: [virtualbox] TASK: [clone project] ********************************************************* failed: [virtualbox] => {""failed"": true} msg: Failed to checkout develop ``` When replaying -- the same result. When commenting `depth: 1` in the first clone -- the same result. When I remove both cloned repos and re-run -- it goes ok. 
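For the git report above (record 802), the symptom is that the second, depth-less clone still behaves as if it were shallow. Until the module behaviour is fixed, a guarded un-shallow step is one way to repair an affected checkout. The sketch below is an assumption, not part of the git module: it relies on git leaving a .git/shallow marker in shallow clones.

```yaml
# Hedged workaround sketch: deepen the second repository if a prior run left it shallow.
- name: check whether the clone ended up shallow
  stat:
    path: '{{ app_2_dir }}/.git/shallow'
  register: shallow_marker

- name: fetch the full history when it did
  command: git fetch --unshallow
  args:
    chdir: '{{ app_2_dir }}'
  when: shallow_marker.stat.exists
```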
",1,subsequent clone uses depth issue type bug report component name git module ansible version ansible environment ubuntu installed using sudo pip install ansible summary i have two subsequent git clone from two different repos on github the first clone uses depth the second does not looks like ansible still uses depth for the second clone so checkout to a branch doesn t work steps to reproduce yaml name clone project git repo git github com user app git version develop dest app dir depth accept hostkey yes name clone project git repo git github com user app git version develop dest app dir accept hostkey yes expected results both clones should be successful actual results play gathering facts ok task skipping task changed task failed failed true msg failed to checkout develop when replaying the same result when commenting depth in the first clone the same result when i remove both cloned repos and re run it goes ok ,1 1177,5096332342.0,IssuesEvent,2017-01-03 17:51:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pip module: logging option ?,affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pip module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = /opt/tmp/vagrant/homelab/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Orchestrator: Linux Ubuntu Trusty or Xenial Guest: various ##### SUMMARY when you do system package, there is a log file with activity history. python pip has a logging option https://pip.pypa.io/en/stable/reference/pip/#file-logging pip module should make it available. As a note, this remark is valid for gem too but I didn't find a native option. example ``` - pip: name=bottle version=0.11 log=/var/log/pip.log ``` should result in command ``` pip install bottle==0.11 --log /var/log/pip.log ``` Thanks ",True,"pip module: logging option ? - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pip module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = /opt/tmp/vagrant/homelab/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Orchestrator: Linux Ubuntu Trusty or Xenial Guest: various ##### SUMMARY when you do system package, there is a log file with activity history. python pip has a logging option https://pip.pypa.io/en/stable/reference/pip/#file-logging pip module should make it available. As a note, this remark is valid for gem too but I didn't find a native option. 
example ``` - pip: name=bottle version=0.11 log=/var/log/pip.log ``` should result in command ``` pip install bottle==0.11 --log /var/log/pip.log ``` Thanks ",1,pip module logging option issue type feature idea component name pip module ansible version ansible version ansible config file opt tmp vagrant homelab ansible cfg configured module search path default w o overrides os environment orchestrator linux ubuntu trusty or xenial guest various summary when you do system package there is a log file with activity history python pip has a logging option pip module should make it available as a note this remark is valid for gem too but i didn t find a native option example pip name bottle version log var log pip log should result in command pip install bottle log var log pip log thanks ,1 1132,4998447713.0,IssuesEvent,2016-12-09 19:53:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,--diff doesn't show all differences with Template module,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3bac945147) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Using `--diff` when running template module doesn't show all the changes when both the template file and mode has changed. ##### STEPS TO REPRODUCE Task: ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0755 become: yes become_method: sudo ``` myconf.conf ``` { ""obj"": { ""type"": ""foo"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` Changed task w/ updated `mode` and updated `myconf.conf` ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0775 become: yes become_method: sudo ``` ``` { ""obj"": { ""type"": ""foo1"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` ##### EXPECTED RESULTS ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": {""mode"": ""0775"", ""path"": ""/abc/myconf.conf""}, ""before"": {""mode"": ""0755"", ""path"": ""/abc/myconf.conf""},{""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n}, ""gid"": 0, ""group"": ""root"", ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""delimiter"": null, ""dest"": ""/abc/myconf.conf"", ""diff_peek"": null, ""directory_mode"": null, ""follow"": true, ""force"": false, ""group"": null, ""mode"": 509, ""original_basename"": ""myconf.conf.j2"", ""owner"": null, ""path"": ""/abc/myconf.conf"", ""recurse"": false, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": null, ""validate"": null}}, ""mode"": ""0755"", ""owner"": ""root"", ""path"": ""/abc/myconf.conf"", ""size"": 138, ""state"": ""file"", ""uid"": 0} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, --- before +++ after 
@@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ##### ACTUAL RESULTS Ansible playbook was run with `--diff` and `--check` flags. The behavior is the same whether `--check` flag was used or not. ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""before_header"": ""/abc/myconf.conf""}, ""invocation"": {""module_args"": {""dest"": ""/abc/myconf.conf"", ""mode"": 509, ""src"": ""myconf.conf.j2""}, ""module_name"": ""template""}} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, ``` NOTE: When only `mode` was updated with no changes to the `myconf.conf` file, the output is as expected as follows: ``` --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ",True,"--diff doesn't show all differences with Template module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3bac945147) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Using `--diff` when running template module doesn't show all the changes when both the template file and mode has changed. ##### STEPS TO REPRODUCE Task: ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0755 become: yes become_method: sudo ``` myconf.conf ``` { ""obj"": { ""type"": ""foo"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` Changed task w/ updated `mode` and updated `myconf.conf` ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0775 become: yes become_method: sudo ``` ``` { ""obj"": { ""type"": ""foo1"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` ##### EXPECTED RESULTS ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": {""mode"": ""0775"", ""path"": ""/abc/myconf.conf""}, ""before"": {""mode"": ""0755"", ""path"": ""/abc/myconf.conf""},{""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n}, ""gid"": 0, ""group"": ""root"", ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""delimiter"": null, ""dest"": ""/abc/myconf.conf"", ""diff_peek"": null, ""directory_mode"": null, ""follow"": true, ""force"": false, ""group"": null, ""mode"": 509, ""original_basename"": ""myconf.conf.j2"", ""owner"": null, ""path"": ""/abc/myconf.conf"", ""recurse"": false, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": 
null, ""validate"": null}}, ""mode"": ""0755"", ""owner"": ""root"", ""path"": ""/abc/myconf.conf"", ""size"": 138, ""state"": ""file"", ""uid"": 0} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ##### ACTUAL RESULTS Ansible playbook was run with `--diff` and `--check` flags. The behavior is the same whether `--check` flag was used or not. ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""before_header"": ""/abc/myconf.conf""}, ""invocation"": {""module_args"": {""dest"": ""/abc/myconf.conf"", ""mode"": 509, ""src"": ""myconf.conf.j2""}, ""module_name"": ""template""}} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, ``` NOTE: When only `mode` was updated with no changes to the `myconf.conf` file, the output is as expected as follows: ``` --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ",1, diff doesn t show all differences with template module issue type bug report component name template ansible version ansible devel configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary using diff when running template module doesn t show all the changes when both the template file and mode has changed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used task name ensures abc myconf conf exists template src my template dest abc myconf conf mode become yes become method sudo myconf conf obj type foo pol a ansible processor vcpus a pol b b pol c c pol d d pol changed task w updated mode and updated myconf conf name ensures abc myconf conf exists template src my template dest abc myconf conf mode become yes become method sudo obj type pol a ansible processor vcpus a pol b b pol c c pol d d pol expected results changed changed true diff after mode path abc myconf conf before mode path abc myconf conf after n obj n type n pol n a a pol n b b pol n c c pol n d d pol n n n n n after header dynamically generated before n obj n type foo n pol n a a pol n b b pol n c c pol n d d pol n n gid group root invocation module args backup null content null delimiter null dest abc myconf conf diff peek null directory mode null follow true force false group null mode original basename myconf conf owner null path abc myconf conf recurse false regexp null remote src null selevel null serole null setype null seuser null src null state null validate null mode owner root path abc myconf conf size state file uid before abc myconf conf after dynamically generated obj type foo 
type pol a a pol b b pol before after mode mode path abc myconf conf actual results ansible playbook was run with diff and check flags the behavior is the same whether check flag was used or not changed changed true diff after n obj n type n pol n a a pol n b b pol n c c pol n d d pol n n n n n after header dynamically generated before n obj n type foo n pol n a a pol n b b pol n c c pol n d d pol n n n n n before header abc myconf conf invocation module args dest abc myconf conf mode src myconf conf module name template before abc myconf conf after dynamically generated obj type foo type pol a a pol b b pol note when only mode was updated with no changes to the myconf conf file the output is as expected as follows before after mode mode path abc myconf conf ,1 1690,6574179228.0,IssuesEvent,2017-09-11 11:50:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,negative lookahead assertion broken in Ansible 2,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/pdxmft/ansible/config/ansible.cfg configured module search path = ['/usr/share/ansible/'] ``` ``` ##### CONFIGURATION ANSIBLE_HOSTS=/home/pdxmft/ansible/config/hosts ANSIBLE_CONFIG=/home/pdxmft/ansible/config/ansible.cfg ansible.cfg is stock ##### OS / ENVIRONMENT Server: RHEL 7.2 Client: RHEL 7.2 ##### SUMMARY ``` Negative lookahead assertions in regular expressions do not appear to be working in Ansible 2, but work perfectly in Ansible 1.9 ``` ##### STEPS TO REPRODUCE ``` I wrote the below to ensure certain options are set in a server's /etc/fstab file for CIS compliance: - name: Describe file system options set_fact: filesystems: - fs: /tmp options: - nodev - nosuid - fs: /home options: - nodev - fs: /dev/shm options: - nodev - nosuid - noexec - name: CIS - Set options for file systems replace: dest=/etc/fstab regexp='(^[/\-\w]*\s+{{item.0.fs}}\s+\w+\s+(?!.*\b{{item.1}}\b))([\w,]+)(\s+[0-9]\s+[0-9])$' replace='\1\2,{{item.1}}\3' with_subelements: - ""{{filesystems}}"" - options My fstab file starts out like this: /dev/mapper/VolGroup00-root / xfs defaults 1 1 UUID=55b51f79-af10-4590-88df-8aefeeedb3fc /boot xfs defaults 0 0 /dev/mapper/VolGroup00-home /home xfs defaults 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults 0 0 /dev/mapper/VolGroup00-var /var xfs defaults 0 0 UUID=c56d0641-b1ef-4ef5-ba3c-1dfb983e28ce swap swap defaults 0 0 The expectation is that options listed under a given file system name will be added to the options for that file system. For example: /dev/mapper/VolGroup00-home /home xfs defaults,nodev 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults,nodev,nosuid 0 0 This worked perfectly on Ansible version 1.9.4, but when I ported it to Ansible 2.1.2, it broke. When I run it on 2.1.2, Ansible no longer detects that the options have already been applied and applies them again. Each time the playbook is run against the servers, another set of options is added to the already existing set: /dev/mapper/VolGroup00-home /home xfs defaults,nodev,nodev 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults,nodev,nosuid,nodev,nosuid 0 0 ``` ##### EXPECTED RESULTS The options listed for each file system should be applied to the line for that file system just once, and should not be applied again if the options already exist in the line. ##### ACTUAL RESULTS The options are applied each time the playbook is run regardless of if the options are already present for the line. 
This occurs with Ansible 2, but not with Ansible 1.9.4. ``` TASK [CIS - Set options for file systems] ************************************** changed: [localhost] => (item=({u'fs': u'/tmp'}, u'nodev')) changed: [localhost] => (item=({u'fs': u'/tmp'}, u'nosuid')) changed: [localhost] => (item=({u'fs': u'/home'}, u'nodev')) ``` ",True,"negative lookahead assertion broken in Ansible 2 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME replace ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/pdxmft/ansible/config/ansible.cfg configured module search path = ['/usr/share/ansible/'] ``` ``` ##### CONFIGURATION ANSIBLE_HOSTS=/home/pdxmft/ansible/config/hosts ANSIBLE_CONFIG=/home/pdxmft/ansible/config/ansible.cfg ansible.cfg is stock ##### OS / ENVIRONMENT Server: RHEL 7.2 Client: RHEL 7.2 ##### SUMMARY ``` Negative lookahead assertions in regular expressions do not appear to be working in Ansible 2, but work perfectly in Ansible 1.9 ``` ##### STEPS TO REPRODUCE ``` I wrote the below to ensure certain options are set in a server's /etc/fstab file for CIS compliance: - name: Describe file system options set_fact: filesystems: - fs: /tmp options: - nodev - nosuid - fs: /home options: - nodev - fs: /dev/shm options: - nodev - nosuid - noexec - name: CIS - Set options for file systems replace: dest=/etc/fstab regexp='(^[/\-\w]*\s+{{item.0.fs}}\s+\w+\s+(?!.*\b{{item.1}}\b))([\w,]+)(\s+[0-9]\s+[0-9])$' replace='\1\2,{{item.1}}\3' with_subelements: - ""{{filesystems}}"" - options My fstab file starts out like this: /dev/mapper/VolGroup00-root / xfs defaults 1 1 UUID=55b51f79-af10-4590-88df-8aefeeedb3fc /boot xfs defaults 0 0 /dev/mapper/VolGroup00-home /home xfs defaults 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults 0 0 /dev/mapper/VolGroup00-var /var xfs defaults 0 0 UUID=c56d0641-b1ef-4ef5-ba3c-1dfb983e28ce swap swap defaults 0 0 The expectation is that options listed under a given file system name will be added to the options for that file system. For example: /dev/mapper/VolGroup00-home /home xfs defaults,nodev 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults,nodev,nosuid 0 0 This worked perfectly on Ansible version 1.9.4, but when I ported it to Ansible 2.1.2, it broke. When I run it on 2.1.2, Ansible no longer detects that the options have already been applied and applies them again. Each time the playbook is run against the servers, another set of options is added to the already existing set: /dev/mapper/VolGroup00-home /home xfs defaults,nodev,nodev 0 0 /dev/mapper/VolGroup00-tmp /tmp xfs defaults,nodev,nosuid,nodev,nosuid 0 0 ``` ##### EXPECTED RESULTS The options listed for each file system should be applied to the line for that file system just once, and should not be applied again if the options already exist in the line. ##### ACTUAL RESULTS The options are applied each time the playbook is run regardless of if the options are already present for the line. This occurs with Ansible 2, but not with Ansible 1.9.4. 
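A note on the replace-module regression reported above: the negative lookahead is standard Python re syntax, so the change in behaviour presumably lies in how the 2.x module builds or applies the pattern, not in the expression itself. For readability, the same task can be written in block YAML; this is only a reformatting sketch of the reporter's task, not a fix.

```yaml
# Same regex as the report, reformatted into block style so the quoting is easier to follow.
- name: CIS - Set options for file systems
  replace:
    dest: /etc/fstab
    regexp: '(^[/\-\w]*\s+{{item.0.fs}}\s+\w+\s+(?!.*\b{{item.1}}\b))([\w,]+)(\s+[0-9]\s+[0-9])$'
    replace: '\1\2,{{item.1}}\3'
  with_subelements:
    - '{{ filesystems }}'
    - options
```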
``` TASK [CIS - Set options for file systems] ************************************** changed: [localhost] => (item=({u'fs': u'/tmp'}, u'nodev')) changed: [localhost] => (item=({u'fs': u'/tmp'}, u'nosuid')) changed: [localhost] => (item=({u'fs': u'/home'}, u'nodev')) ``` ",1,negative lookahead assertion broken in ansible issue type bug report component name replace ansible version ansible config file home pdxmft ansible config ansible cfg configured module search path configuration ansible hosts home pdxmft ansible config hosts ansible config home pdxmft ansible config ansible cfg ansible cfg is stock os environment server rhel client rhel summary negative lookahead assertions in regular expressions do not appear to be working in ansible but work perfectly in ansible steps to reproduce i wrote the below to ensure certain options are set in a server s etc fstab file for cis compliance name describe file system options set fact filesystems fs tmp options nodev nosuid fs home options nodev fs dev shm options nodev nosuid noexec name cis set options for file systems replace dest etc fstab regexp s item fs s w s b item b s s replace item with subelements filesystems options my fstab file starts out like this dev mapper root xfs defaults uuid boot xfs defaults dev mapper home home xfs defaults dev mapper tmp tmp xfs defaults dev mapper var var xfs defaults uuid swap swap defaults the expectation is that options listed under a given file system name will be added to the options for that file system for example dev mapper home home xfs defaults nodev dev mapper tmp tmp xfs defaults nodev nosuid this worked perfectly on ansible version but when i ported it to ansible it broke when i run it on ansible no longer detects that the options have already been applied and applies them again each time the playbook is run against the servers another set of options is added to the already existing set dev mapper home home xfs defaults nodev nodev dev mapper tmp tmp xfs defaults nodev nosuid nodev nosuid expected results the options listed for each file system should be applied to the line for that file system just once and should not be applied again if the options already exist in the line actual results the options are applied each time the playbook is run regardless of if the options are already present for the line this occurs with ansible but not with ansible task changed item u fs u tmp u nodev changed item u fs u tmp u nosuid changed item u fs u home u nodev ,1 1123,4990295085.0,IssuesEvent,2016-12-08 14:42:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,RFE/Feature Idea: os_object - be able to specify the amount of threads used in operations,affects_2.3 cloud feature_idea openstack waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME openstack/os_object ##### ANSIBLE VERSION Not relevant ##### CONFIGURATION Not relevant ##### OS / ENVIRONMENT Not relevant ##### SUMMARY swiftclient has built-in parameters in order to split tasks into different threads in order to speed up operations, example: ``` --object-threads=OBJECT_THREADS Number of threads to use for uploading full objects. Default is 10. --segment-threads=SEGMENT_THREADS Number of threads to use for uploading object segments. Default is 10. ``` the os_object module does not expose these and it would be an interesting feature to add in order to speed up long operations involving many files. 
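Until os_object grows such parameters, the thread counts requested above can be reached through the swift CLI that exposes them (the flags are the ones quoted in the summary). A hedged workaround sketch, assuming python-swiftclient is installed on the target and the usual OS_* credentials are available in the environment; the container and folder names simply mirror the example in the report:

```yaml
# Workaround sketch only; names mirror the report, everything else is assumed.
- name: upload huge_folder with more worker threads
  command: swift upload destfolder huge_folder --object-threads 100 --segment-threads 100
```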
##### EXPECTED RESULTS ``` # Takes two minutes - os_object: cloud: mordred state: present name: huge_folder container: destfolder filename: huge_folder object_threads: 100 ``` ##### ACTUAL RESULTS ``` # Takes two hours - os_object: cloud: mordred state: present name: huge_folder container: destfolder filename: huge_folder ``` ",True,"RFE/Feature Idea: os_object - be able to specify the amount of threads used in operations - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME openstack/os_object ##### ANSIBLE VERSION Not relevant ##### CONFIGURATION Not relevant ##### OS / ENVIRONMENT Not relevant ##### SUMMARY swiftclient has built-in parameters in order to split tasks into different threads in order to speed up operations, example: ``` --object-threads=OBJECT_THREADS Number of threads to use for uploading full objects. Default is 10. --segment-threads=SEGMENT_THREADS Number of threads to use for uploading object segments. Default is 10. ``` the os_object module does not expose these and it would be an interesting feature to add in order to speed up long operations involving many files. ##### EXPECTED RESULTS ``` # Takes two minutes - os_object: cloud: mordred state: present name: huge_folder container: destfolder filename: huge_folder object_threads: 100 ``` ##### ACTUAL RESULTS ``` # Takes two hours - os_object: cloud: mordred state: present name: huge_folder container: destfolder filename: huge_folder ``` ",1,rfe feature idea os object be able to specify the amount of threads used in operations issue type feature idea component name openstack os object ansible version not relevant configuration not relevant os environment not relevant summary swiftclient has built in parameters in order to split tasks into different threads in order to speed up operations example object threads object threads number of threads to use for uploading full objects default is segment threads segment threads number of threads to use for uploading object segments default is the os object module does not expose these and it would be an interesting feature to add in order to speed up long operations involving many files expected results takes two minutes os object cloud mordred state present name huge folder container destfolder filename huge folder object threads actual results takes two hours os object cloud mordred state present name huge folder container destfolder filename huge folder ,1 1812,6577311778.0,IssuesEvent,2017-09-12 00:01:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Enable the use of JSON for junos_command,affects_2.2 feature_idea networking waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME network/junos/junos_command.py ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT --- JUNOS 15.1R3.6 built 2016-03-24 18:40:35 UTC ##### SUMMARY Enable format JSON for output from junos device running 14.2 or higher. From 14.2 onward, junos supports JSON ##### STEPS TO REPRODUCE ``` - name: show version json junos_command: host: ""{{ inventory_hostname }}"" commands: - ""show version"" format: json ``` Results: ``` ....... 
""stdout"": [{""multi-routing-engine-results"": [{""multi-routing-engine-item"": [{""re-name"": [{""data"": ""fpc0""}], ""software-information"": [{""host-name"": [{""data"": ""netlab-sw01-a""}], ""junos-version"": [{""data"": ""15.1R3.6""}], ""package-information"": [{""comment"": [{""data"": ""JUNOS EX Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos""}]}, {""comment"": [{""data"": ""JUNOS FIPS mode utilities [15.1R3.6]""}], ""name"": [{""data"": ""fips-mode-powerpc""}]}, {""comment"": [{""data"": ""JUNOS Online Documentation [15.1R3.6]""}], ""name"": [{""data"": ""jdocs-ex""}]}, {""comment"": [{""data"": ""JUNOS EX 4200 Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos-ex-4200""}]}, {""comment"": [{""data"": ""JUNOS Web Management Platform Package [15.1R3.6]""}], ""name"": [{""data"": ""jweb-ex""}]}], ""product-model"": [{""data"": ""ex4200-24t""}], ""product-name"": [{""data"": ""ex4200-24t""}]}]}]}]}], ""stdout_lines"": [{""multi-routing-engine-results"": [{""multi-routing-engine-item"": [{""re-name"": [{""data"": ""fpc0""}], ""software-information"": [{""host-name"": [{""data"": ""netlab-sw01-a""}], ""junos-version"": [{""data"": ""15.1R3.6""}], ""package-information"": [{""comment"": [{""data"": ""JUNOS EX Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos""}]}, {""comment"": [{""data"": ""JUNOS FIPS mode utilities [15.1R3.6]""}], ""name"": [{""data"": ""fips-mode-powerpc""}]}, {""comment"": [{""data"": ""JUNOS Online Documentation [15.1R3.6]""}], ""name"": [{""data"": ""jdocs-ex""}]}, {""comment"": [{""data"": ""JUNOS EX 4200 Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos-ex-4200""}]}, {""comment"": [{""data"": ""JUNOS Web Management Platform Package [15.1R3.6]""}], ""name"": [{""data"": ""jweb-ex""}]}], ""product-model"": [{""data"": ""ex4200-24t""}], ""product-name"": [{""data"": ""ex4200-24t""}]}]}]}]}]} ``` ",True,"Enable the use of JSON for junos_command - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME network/junos/junos_command.py ##### ANSIBLE VERSION ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT --- JUNOS 15.1R3.6 built 2016-03-24 18:40:35 UTC ##### SUMMARY Enable format JSON for output from junos device running 14.2 or higher. From 14.2 onward, junos supports JSON ##### STEPS TO REPRODUCE ``` - name: show version json junos_command: host: ""{{ inventory_hostname }}"" commands: - ""show version"" format: json ``` Results: ``` ....... 
""stdout"": [{""multi-routing-engine-results"": [{""multi-routing-engine-item"": [{""re-name"": [{""data"": ""fpc0""}], ""software-information"": [{""host-name"": [{""data"": ""netlab-sw01-a""}], ""junos-version"": [{""data"": ""15.1R3.6""}], ""package-information"": [{""comment"": [{""data"": ""JUNOS EX Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos""}]}, {""comment"": [{""data"": ""JUNOS FIPS mode utilities [15.1R3.6]""}], ""name"": [{""data"": ""fips-mode-powerpc""}]}, {""comment"": [{""data"": ""JUNOS Online Documentation [15.1R3.6]""}], ""name"": [{""data"": ""jdocs-ex""}]}, {""comment"": [{""data"": ""JUNOS EX 4200 Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos-ex-4200""}]}, {""comment"": [{""data"": ""JUNOS Web Management Platform Package [15.1R3.6]""}], ""name"": [{""data"": ""jweb-ex""}]}], ""product-model"": [{""data"": ""ex4200-24t""}], ""product-name"": [{""data"": ""ex4200-24t""}]}]}]}]}], ""stdout_lines"": [{""multi-routing-engine-results"": [{""multi-routing-engine-item"": [{""re-name"": [{""data"": ""fpc0""}], ""software-information"": [{""host-name"": [{""data"": ""netlab-sw01-a""}], ""junos-version"": [{""data"": ""15.1R3.6""}], ""package-information"": [{""comment"": [{""data"": ""JUNOS EX Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos""}]}, {""comment"": [{""data"": ""JUNOS FIPS mode utilities [15.1R3.6]""}], ""name"": [{""data"": ""fips-mode-powerpc""}]}, {""comment"": [{""data"": ""JUNOS Online Documentation [15.1R3.6]""}], ""name"": [{""data"": ""jdocs-ex""}]}, {""comment"": [{""data"": ""JUNOS EX 4200 Software Suite [15.1R3.6]""}], ""name"": [{""data"": ""junos-ex-4200""}]}, {""comment"": [{""data"": ""JUNOS Web Management Platform Package [15.1R3.6]""}], ""name"": [{""data"": ""jweb-ex""}]}], ""product-model"": [{""data"": ""ex4200-24t""}], ""product-name"": [{""data"": ""ex4200-24t""}]}]}]}]}]} ``` ",1,enable the use of json for junos command issue type feature idea component name network junos junos command py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific junos built utc summary enable format json for output from junos device running or higher from onward junos supports json steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name show version json junos command host inventory hostname commands show version format json results stdout software information junos version package information name comment name comment name comment name comment name product model product name stdout lines software information junos version package information name comment name comment name comment name comment name product model product name ,1 1853,6577396614.0,IssuesEvent,2017-09-12 00:37:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Support addgroup in group-module,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME group module ##### ANSIBLE VERSION ``` 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Target: Alpine Linux ##### SUMMARY 'groupadd' (provided by shadow) is currently in testing only. 
Ansible should use 'addgroup' (provided by Busybox) ##### STEPS TO REPRODUCE Install Alpine Linux ``` - group: name=blafoo state=present ``` ##### EXPECTED RESULTS Correct managing of group ##### ACTUAL RESULTS Failed because binary groupadd not found ",True,"Support addgroup in group-module - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME group module ##### ANSIBLE VERSION ``` 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Target: Alpine Linux ##### SUMMARY 'groupadd' (provided by shadow) is currently in testing only. Ansible should use 'addgroup' (provided by Busybox) ##### STEPS TO REPRODUCE Install Alpine Linux ``` - group: name=blafoo state=present ``` ##### EXPECTED RESULTS Correct managing of group ##### ACTUAL RESULTS Failed because binary groupadd not found ",1,support addgroup in group module issue type feature idea component name group module ansible version configuration os environment target alpine linux summary groupadd provided by shadow is currently in testing only ansible should use addgroup provided by busybox steps to reproduce install alpine linux group name blafoo state present expected results correct managing of group actual results failed because binary groupadd not found ,1 1066,4889234103.0,IssuesEvent,2016-11-18 09:31:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,include_role : troubleshooting passing variable,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg configured module search path = ['library'] ``` (branch stable-2.2) ##### CONFIGURATION local roles and libraries location ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY role vars are not visible in vars attribute of include_role ##### STEPS TO REPRODUCE ``` - hosts: all gather_facts: True tasks: - include_role: name: ""role_test_a"" vars: a: ""test"" ``` role_test_a/tasks/main.yml : ``` --- - debug: var=a - debug: var=c - include_role: name: ""role_test_b"" vars: b: ""{{ c }}"" ``` role_test_a/vars/main.yml ``` --- c: ""{{ a }} dummy"" ``` role_test_b/tasks/main.yml ``` --- - debug: var=b ``` ##### EXPECTED RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""a"": ""test"" } TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""c"": ""test dummy"" } TASK [role_test_b : debug] ***************************************************** ok: [host] => { ""b"": ""test dummy"" } PLAY RECAP ********************************************************************* host : ok=4 changed=0 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""a"": ""test"" } TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""c"": ""test dummy"" } TASK [role_test_b : debug] ***************************************************** ok: [host] => { ""b"": ""VARIABLE IS NOT DEFINED!"" } PLAY RECAP 
********************************************************************* host : ok=4 changed=0 unreachable=0 failed=0 ``` ",True,"include_role : troubleshooting passing variable - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME include_role ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg configured module search path = ['library'] ``` (branch stable-2.2) ##### CONFIGURATION local roles and libraries location ##### OS / ENVIRONMENT Master: Ubuntu 16.04.2 Managed: Rhel 6.6 ##### SUMMARY role vars are not visible in vars attribute of include_role ##### STEPS TO REPRODUCE ``` - hosts: all gather_facts: True tasks: - include_role: name: ""role_test_a"" vars: a: ""test"" ``` role_test_a/tasks/main.yml : ``` --- - debug: var=a - debug: var=c - include_role: name: ""role_test_b"" vars: b: ""{{ c }}"" ``` role_test_a/vars/main.yml ``` --- c: ""{{ a }} dummy"" ``` role_test_b/tasks/main.yml ``` --- - debug: var=b ``` ##### EXPECTED RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""a"": ""test"" } TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""c"": ""test dummy"" } TASK [role_test_b : debug] ***************************************************** ok: [host] => { ""b"": ""test dummy"" } PLAY RECAP ********************************************************************* host : ok=4 changed=0 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [host] TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""a"": ""test"" } TASK [role_test_a : debug] ***************************************************** ok: [host] => { ""c"": ""test dummy"" } TASK [role_test_b : debug] ***************************************************** ok: [host] => { ""b"": ""VARIABLE IS NOT DEFINED!"" } PLAY RECAP ********************************************************************* host : ok=4 changed=0 unreachable=0 failed=0 ``` ",1,include role troubleshooting passing variable issue type bug report component name include role ansible version ansible config file home userdev documents dlp ansible deploy aqz ansible cfg configured module search path branch stable configuration local roles and libraries location os environment master ubuntu managed rhel summary role vars are not visible in vars attribute of include role steps to reproduce hosts all gather facts true tasks include role name role test a vars a test role test a tasks main yml debug var a debug var c include role name role test b vars b c role test a vars main yml c a dummy role test b tasks main yml debug var b expected results play task ok task ok a test task ok c test dummy task ok b test dummy play recap host ok changed unreachable failed actual results play task ok task ok a test task ok c test dummy task ok b variable is not defined play recap host ok changed unreachable failed ,1 1676,6574105887.0,IssuesEvent,2017-09-11 11:30:47,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user module: add parameter for --disabled-password option,affects_2.2 feature_idea 
waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME user ##### ANSIBLE VERSION 2.2.0.0 ##### SUMMARY Sometimes it can be useful to create a user with the flag --disabled-password, instead of just not setting a password. ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS Taken from: http://stackoverflow.com/questions/39013796/create-user-with-option-disabled-password-by-ansible/39018859 Comparison of adduser --disabled-password test1 and - user: name=test2 state=present: ``` # grep test /etc/shadow test1:*:17031:0:99999:7::: test2:!:17031:0:99999:7::: # passwd -S test1 test1 L 08/18/2016 0 99999 7 -1 # passwd -S test2 test2 L 08/18/2016 0 99999 7 -1 ``` In one case the password is prefixed with **!** and in the other **\***. The exclamation mark makes the account locked, and can prevent logging in for hardened ssh configuration. Ideally we would be able to create an account that isn't locker and that doesn't have a password by setting a flag: user: name=test2 state=present disabled-password=true",True,"user module: add parameter for --disabled-password option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME user ##### ANSIBLE VERSION 2.2.0.0 ##### SUMMARY Sometimes it can be useful to create a user with the flag --disabled-password, instead of just not setting a password. ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS Taken from: http://stackoverflow.com/questions/39013796/create-user-with-option-disabled-password-by-ansible/39018859 Comparison of adduser --disabled-password test1 and - user: name=test2 state=present: ``` # grep test /etc/shadow test1:*:17031:0:99999:7::: test2:!:17031:0:99999:7::: # passwd -S test1 test1 L 08/18/2016 0 99999 7 -1 # passwd -S test2 test2 L 08/18/2016 0 99999 7 -1 ``` In one case the password is prefixed with **!** and in the other **\***. The exclamation mark makes the account locked, and can prevent logging in for hardened ssh configuration. 
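Until a dedicated flag exists, the same shadow entry that adduser --disabled-password produces can be obtained from the current user module by passing the literal '*' as the crypted password, as sketched below; this only mirrors the test1/test2 shadow comparison above and does not add any new module behaviour.

```
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: create the account with no usable password but without the locking '!'
      user:
        name: test2
        state: present
        password: '*'
```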
Ideally we would be able to create an account that isn't locker and that doesn't have a password by setting a flag: user: name=test2 state=present disabled-password=true",1,user module add parameter for disabled password option issue type feature idea component name user ansible version summary sometimes it can be useful to create a user with the flag disabled password instead of just not setting a password steps to reproduce expected results taken from comparison of adduser disabled password and user name state present grep test etc shadow passwd s l passwd s l in one case the password is prefixed with and in the other the exclamation mark makes the account locked and can prevent logging in for hardened ssh configuration ideally we would be able to create an account that isn t locker and that doesn t have a password by setting a flag user name state present disabled password true,1 1809,6576169387.0,IssuesEvent,2017-09-11 18:47:00,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module always fails on update if submodules are checked out at different commit,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` (current `devel` branch) ##### CONFIGURATION ##### OS / ENVIRONMENT Host: CentOS 7, Target: CentOS 7 ##### SUMMARY If a submodule's checked out commit is different than the SHA-1 commited to the super-repository, an update attempt of the super-repository using the git module always fails with `Local modifications exist`, even if `force=yes` was given. ##### STEPS TO REPRODUCE ###### Preconditions - Repo `repo.git`, which contains submodules, has already been cloned - At least one submodule has been checked out at a different commit than commited to the repo `repo.git`; i.e. the instance of the checked out repo is ""dirty"" and `git status` shows something like: ``` # On branch master # Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded. # (use ""git pull"" to update your local branch) # # Changes not staged for commit: # (use ""git add ..."" to update what will be committed) # (use ""git checkout -- ..."" to discard changes in working directory) # # modified: (new commits) ``` ###### Example playbook to reproduce ``` - hosts: all gather_facts: False tasks: - name: checkout repo git: repo: user@host:/var/lib/git/repo.git dest: /tmp/repo.git accept_hostkey: yes ssh_opts: ""-o StrictHostKeyChecking=no"" force: yes update: yes ``` ##### EXPECTED RESULTS The super-repository should be updated and submodules shall be checkedout to the stored SHA-1 reference within the super-repository. ##### ACTUAL RESULTS ``` fatal: [cali]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /home/mf/.ssh/config\r\ndebug1: /home/mf/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25086\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to cali closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_rYIDTj/ansible_module_git.py\"", line 1022, in \r\n main()\r\n File \""/tmp/ansible_rYIDTj/ansible_module_git.py\"", line 973, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",True,"git module always fails on update if submodules are checked out at different commit - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` (current `devel` branch) ##### CONFIGURATION ##### OS / ENVIRONMENT Host: CentOS 7, Target: CentOS 7 ##### SUMMARY If a submodule's checked out commit is different than the SHA-1 commited to the super-repository, an update attempt of the super-repository using the git module always fails with `Local modifications exist`, even if `force=yes` was given. ##### STEPS TO REPRODUCE ###### Preconditions - Repo `repo.git`, which contains submodules, has already been cloned - At least one submodule has been checked out at a different commit than commited to the repo `repo.git`; i.e. the instance of the checked out repo is ""dirty"" and `git status` shows something like: ``` # On branch master # Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded. # (use ""git pull"" to update your local branch) # # Changes not staged for commit: # (use ""git add ..."" to update what will be committed) # (use ""git checkout -- ..."" to discard changes in working directory) # # modified: (new commits) ``` ###### Example playbook to reproduce ``` - hosts: all gather_facts: False tasks: - name: checkout repo git: repo: user@host:/var/lib/git/repo.git dest: /tmp/repo.git accept_hostkey: yes ssh_opts: ""-o StrictHostKeyChecking=no"" force: yes update: yes ``` ##### EXPECTED RESULTS The super-repository should be updated and submodules shall be checkedout to the stored SHA-1 reference within the super-repository. ##### ACTUAL RESULTS ``` fatal: [cali]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /home/mf/.ssh/config\r\ndebug1: /home/mf/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25086\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to cali closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_rYIDTj/ansible_module_git.py\"", line 1022, in \r\n main()\r\n File \""/tmp/ansible_rYIDTj/ansible_module_git.py\"", line 973, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",1,git module always fails on update if submodules are checked out at different commit issue type bug report component name git ansible version ansible current devel branch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific host centos target centos summary if a submodule s checked out commit is different than the sha commited to the super repository an update attempt of the super repository using the git module always fails with local modifications exist even if force yes was given steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used preconditions repo repo git which contains submodules has already been cloned at least one submodule has been checked out at a different commit than commited to the repo repo git i e the instance of the checked out repo is dirty and git status shows something like on branch master your branch is behind origin master by commits and can be fast forwarded use git pull to update your local branch changes not staged for commit use git add to update what will be committed use git checkout to discard changes in working directory modified new commits example playbook to reproduce hosts all gather facts false tasks name checkout repo git repo user host var lib git repo git dest tmp repo git accept hostkey yes ssh opts o stricthostkeychecking no force yes update yes expected results the super repository should be updated and submodules shall be checkedout to the stored sha reference within the super repository actual results fatal failed changed false failed true invocation module name git module stderr openssh openssl fips feb r reading configuration data home mf ssh config r home mf ssh config line applying options for r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o 
nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to cali closed r n module stdout traceback most recent call last r n file tmp ansible ryidtj ansible module git py line in r n main r n file tmp ansible ryidtj ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure ,1 1799,6575913527.0,IssuesEvent,2017-09-11 17:48:49,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Mount With Complex Password Hangs Forever,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mount ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes were made to ansible configuration. ##### OS / ENVIRONMENT ubuntu 12.04 (target host) ubuntu 14.04 (ansible controller) ##### SUMMARY When a mount password is provided that has unacceptable characters, fstab is not parsed properly which results in /bin/mount prompting for a password. This causes the mount command to hang silently forever. ##### STEPS TO REPRODUCE We're mounting an NFS drive, which (eventually) uses AD as the auth mechanism. I don't believe that contributes to the issue. The whole issue seems to be that /bin/mount is asking for stdin. ``` - name: setup NFS host hosts: nfs_host vars: mount_password: 'MyPassword/[""HasFunnyCharacters' tasks: - name: mount nfs share mount: name: ""/mnt/myshare"" src: ""//myshare/somedir"" fstype: ""cifs"" opts: ""domain=mydomain,user=myuser,password={{ mount_password }}"" state: mounted ``` ##### EXPECTED RESULTS I believe that mount should timeout or die rather quickly if input is requested but unavailable. I realize there are various other mount options to use to avoid this scenario, but it would be nice if the mount module handled the input request in some elegant way and fail out. ##### ACTUAL RESULTS Mount will hang forever when `/bin/mount /mnt/myshare` is called. ",True,"Mount With Complex Password Hangs Forever - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mount ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes were made to ansible configuration. ##### OS / ENVIRONMENT ubuntu 12.04 (target host) ubuntu 14.04 (ansible controller) ##### SUMMARY When a mount password is provided that has unacceptable characters, fstab is not parsed properly which results in /bin/mount prompting for a password. This causes the mount command to hang silently forever. ##### STEPS TO REPRODUCE We're mounting an NFS drive, which (eventually) uses AD as the auth mechanism. I don't believe that contributes to the issue. The whole issue seems to be that /bin/mount is asking for stdin. 
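A common way around the stdin prompt, sketched below on the reporter's values, is to keep the CIFS credentials in a root-only file so the awkward characters never land in the fstab options string; the /root/.myshare-credentials path is illustrative.

```
- hosts: nfs_host
  become: yes
  gather_facts: no
  tasks:
    - name: drop a credentials file for the share (illustrative path)
      copy:
        dest: /root/.myshare-credentials
        mode: '0600'
        content: |
          domain=mydomain
          username=myuser
          password=MyPassword/["HasFunnyCharacters

    - name: mount the share through the credentials file
      mount:
        name: /mnt/myshare
        src: //myshare/somedir
        fstype: cifs
        opts: credentials=/root/.myshare-credentials
        state: mounted
```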
``` - name: setup NFS host hosts: nfs_host vars: mount_password: 'MyPassword/[""HasFunnyCharacters' tasks: - name: mount nfs share mount: name: ""/mnt/myshare"" src: ""//myshare/somedir"" fstype: ""cifs"" opts: ""domain=mydomain,user=myuser,password={{ mount_password }}"" state: mounted ``` ##### EXPECTED RESULTS I believe that mount should timeout or die rather quickly if input is requested but unavailable. I realize there are various other mount options to use to avoid this scenario, but it would be nice if the mount module handled the input request in some elegant way and fail out. ##### ACTUAL RESULTS Mount will hang forever when `/bin/mount /mnt/myshare` is called. ",1,mount with complex password hangs forever issue type bug report component name mount ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes were made to ansible configuration os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu target host ubuntu ansible controller summary when a mount password is provided that has unacceptable characters fstab is not parsed properly which results in bin mount prompting for a password this causes the mount command to hang silently forever steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used we re mounting an nfs drive which eventually uses ad as the auth mechanism i don t believe that contributes to the issue the whole issue seems to be that bin mount is asking for stdin name setup nfs host hosts nfs host vars mount password mypassword hasfunnycharacters tasks name mount nfs share mount name mnt myshare src myshare somedir fstype cifs opts domain mydomain user myuser password mount password state mounted expected results i believe that mount should timeout or die rather quickly if input is requested but unavailable i realize there are various other mount options to use to avoid this scenario but it would be nice if the mount module handled the input request in some elegant way and fail out actual results mount will hang forever when bin mount mnt myshare is called ,1 1559,6572254405.0,IssuesEvent,2017-09-11 00:39:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,unable to set mysql root password the first time,affects_1.9 bug_report waiting_on_maintainer,"Issue Type: Bug Report Component Name: mysql_user Ansible Version: ``` ansible 1.9.4 (stable-1.9 2d914d4b1e) last updated 2015/11/16 21:35:26 (GMT +100) lib/ansible/modules/core: (stable-1.9 4b65a4a8b5) last updated 2015/11/16 21:35:36 (GMT +100) lib/ansible/modules/extras: (stable-1.9 29c3e31a92) last updated 2015/11/16 21:35:50 (GMT +100) configured module search path = None ``` Ansible Configuration: ``` [defaults] inventory = ./hosts host_key_checking=False forks = 40 display_skipped_hosts=False retry_files_save_path = ~/.ansible-retry log_path=/var/log/ansible.log gathering = smart # pipelining=True ``` Environment: ``` lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.3 LTS Release: 14.04 Codename: trusty ``` Summary: Im unable to set the root password for mysql when deploying it with ansible. Steps To Reproduce: 1. 
run [this](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-mysq-yml) playbook once and get [this](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-first_run-txt) result. 2. ssh to the server and try mysql without any luck. ([log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-first_sql_attempt-txt)) 3. re-run playbook [log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-second_run-txt) 4. Run sql with success ([log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-second_sql-attempt-txt)) Expected Results: Be able to start a mysql root shell at step 2 (and not at step 4). Actual Results: Mysql password was not set until i applied the same playbook twice. ",True,"unable to set mysql root password the first time - Issue Type: Bug Report Component Name: mysql_user Ansible Version: ``` ansible 1.9.4 (stable-1.9 2d914d4b1e) last updated 2015/11/16 21:35:26 (GMT +100) lib/ansible/modules/core: (stable-1.9 4b65a4a8b5) last updated 2015/11/16 21:35:36 (GMT +100) lib/ansible/modules/extras: (stable-1.9 29c3e31a92) last updated 2015/11/16 21:35:50 (GMT +100) configured module search path = None ``` Ansible Configuration: ``` [defaults] inventory = ./hosts host_key_checking=False forks = 40 display_skipped_hosts=False retry_files_save_path = ~/.ansible-retry log_path=/var/log/ansible.log gathering = smart # pipelining=True ``` Environment: ``` lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.3 LTS Release: 14.04 Codename: trusty ``` Summary: Im unable to set the root password for mysql when deploying it with ansible. Steps To Reproduce: 1. run [this](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-mysq-yml) playbook once and get [this](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-first_run-txt) result. 2. ssh to the server and try mysql without any luck. ([log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-first_sql_attempt-txt)) 3. re-run playbook [log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-second_run-txt) 4. Run sql with success ([log](https://gist.github.com/serialdoom/427b246ab52b229da52b#file-second_sql-attempt-txt)) Expected Results: Be able to start a mysql root shell at step 2 (and not at step 4). Actual Results: Mysql password was not set until i applied the same playbook twice. 
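For comparison, the usual first-run pattern that avoids this two-pass behaviour is sketched below: check_implicit_admin lets the module fall back to the blank install-time root login, and every root@host row is covered in one loop. The mysql_root_password variable and the dbservers group are assumptions, and facts must be gathered for ansible_hostname.

```
- hosts: dbservers
  become: yes
  tasks:
    - name: set the root password on every host entry MySQL ships with
      mysql_user:
        name: root
        host: "{{ item }}"
        password: "{{ mysql_root_password }}"
        login_user: root
        login_password: "{{ mysql_root_password }}"
        check_implicit_admin: yes
      with_items:
        - localhost
        - 127.0.0.1
        - ::1
        - "{{ ansible_hostname }}"
```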
",1,unable to set mysql root password the first time issue type bug report component name mysql user ansible version ansible stable last updated gmt lib ansible modules core stable last updated gmt lib ansible modules extras stable last updated gmt configured module search path none ansible configuration inventory hosts host key checking false forks display skipped hosts false retry files save path ansible retry log path var log ansible log gathering smart pipelining true environment lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename trusty summary im unable to set the root password for mysql when deploying it with ansible steps to reproduce run playbook once and get result ssh to the server and try mysql without any luck re run playbook run sql with success expected results be able to start a mysql root shell at step and not at step actual results mysql password was not set until i applied the same playbook twice ,1 956,4702099560.0,IssuesEvent,2016-10-13 00:16:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Document EOS Min Version,affects_2.2 bug_report in progress networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_template, eos_config ##### ANSIBLE VERSION 2.2 ##### SUMMARY The latest implementation in devel for 2.2 uses a feature in EOS, session-config. This feature was introduced in EOS 4.15.0F. Therefore, the module documentation should clearly indicate this, otherwise you end up with: ``` TASK [Arista EOS Base Configuration] ******************************************* An exception occurred during task execution. To see the full traceback, use -vvv. The error was: localhost(config-s-ansibl)# fatal: [172.16.130.201]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\"", line 213, in \n main()\n File \""/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\"", line 205, in main\n commit=True)\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/netcfg.py\"", line 58, in load_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\"", line 78, in load_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\"", line 102, in diff_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show session-config diffs\r\n% Invalid input\r\nlocalhost(config-s-ansibl)#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/ansi/base_configuration.retry PLAY RECAP ********************************************************************* 172.16.130.201 : ok=0 changed=0 unreachable=0 failed=1 ``` I'd also like to recommend looking for ``invalid input`` and maybe offering a better message which hints to the user that their version of EOS is too old.",True,"Document EOS Min Version - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_template, eos_config ##### ANSIBLE VERSION 2.2 ##### SUMMARY The latest implementation in devel for 2.2 uses a feature in EOS, session-config. This feature was introduced in EOS 4.15.0F. 
Therefore, the module documentation should clearly indicate this, otherwise you end up with: ``` TASK [Arista EOS Base Configuration] ******************************************* An exception occurred during task execution. To see the full traceback, use -vvv. The error was: localhost(config-s-ansibl)# fatal: [172.16.130.201]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\"", line 213, in \n main()\n File \""/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\"", line 205, in main\n commit=True)\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/netcfg.py\"", line 58, in load_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\"", line 78, in load_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\"", line 102, in diff_config\n File \""/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/shell.py\"", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show session-config diffs\r\n% Invalid input\r\nlocalhost(config-s-ansibl)#\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/ansi/base_configuration.retry PLAY RECAP ********************************************************************* 172.16.130.201 : ok=0 changed=0 unreachable=0 failed=1 ``` I'd also like to recommend looking for ``invalid input`` and maybe offering a better message which hints to the user that their version of EOS is too old.",1,document eos min version issue type bug report component name eos template eos config ansible version summary the latest implementation in devel for uses a feature in eos session config this feature was introduced in eos therefore the module documentation should clearly indicate this otherwise you end up with task an exception occurred during task execution to see the full traceback use vvv the error was localhost config s ansibl fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module eos template py line in n main n file tmp ansible ansible module eos template py line in main n commit true n file tmp ansible ansible modlib zip ansible module utils netcfg py line in load config n file tmp ansible ansible modlib zip ansible module utils eos py line in load config n file tmp ansible ansible modlib zip ansible module utils eos py line in diff config n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response show session config diffs r n invalid input r nlocalhost config s ansibl n module stdout msg module failure to retry use limit ansi base configuration retry play recap ok changed unreachable failed i d also like to recommend looking for invalid input and maybe offering a better message which hints to the user that their version of eos is too old ,1 764,4364165033.0,IssuesEvent,2016-08-03 05:07:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service: wrong logic or processing services/scale,bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT Centos 7, docker-1.11.2, docker-compose 1.7.1 ##### SUMMARY docker_service launches services, not mentioned 
in service parameter ##### STEPS TO REPRODUCE ``` [root@ba0 test]# cat docker-compose.yml version: ""2"" services: centos1: image: centos:centos7 command: sleep 1d centos2: image: centos:centos7 command: sleep 1d centos3: image: centos:centos7 command: sleep 1d ansible -m docker_service -c local localhost -a ""project_src=/tmp/test/ services=centos2 scale={'centos1':2} state=present"" ``` ##### EXPECTED RESULTS I expect that only 'centos2' service launched ##### ACTUAL RESULTS Both centos2 and scaled centos1 launched. ``` localhost | SUCCESS => { ""ansible_facts"": { ""centos1"": { ""test_centos1_1"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""1"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos1"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.3"", ""IPPrefixLen"": 16, ""aliases"": [ ""centos1"", ""cba992c086f6"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:03"" } }, ""state"": { ""running"": true, ""status"": ""running"" } }, ""test_centos1_2"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""2"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos1"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.4"", ""IPPrefixLen"": 16, ""aliases"": [ ""30249f7e074e"", ""centos1"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:04"" } }, ""state"": { ""running"": true, ""status"": ""running"" } } }, ""centos2"": { ""test_centos2_1"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""1"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos2"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.2"", ""IPPrefixLen"": 16, ""aliases"": [ ""81c91cb6007b"", ""centos2"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:02"" } }, ""state"": { ""running"": true, ""status"": ""running"" } } }, ""centos3"": {} }, ""changed"": true } ``` ",True,"docker_service: wrong logic or processing services/scale - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT Centos 7, docker-1.11.2, docker-compose 1.7.1 ##### SUMMARY docker_service launches services, not mentioned in service 
parameter ##### STEPS TO REPRODUCE ``` [root@ba0 test]# cat docker-compose.yml version: ""2"" services: centos1: image: centos:centos7 command: sleep 1d centos2: image: centos:centos7 command: sleep 1d centos3: image: centos:centos7 command: sleep 1d ansible -m docker_service -c local localhost -a ""project_src=/tmp/test/ services=centos2 scale={'centos1':2} state=present"" ``` ##### EXPECTED RESULTS I expect that only 'centos2' service launched ##### ACTUAL RESULTS Both centos2 and scaled centos1 launched. ``` localhost | SUCCESS => { ""ansible_facts"": { ""centos1"": { ""test_centos1_1"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""1"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos1"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.3"", ""IPPrefixLen"": 16, ""aliases"": [ ""centos1"", ""cba992c086f6"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:03"" } }, ""state"": { ""running"": true, ""status"": ""running"" } }, ""test_centos1_2"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""2"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos1"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.4"", ""IPPrefixLen"": 16, ""aliases"": [ ""30249f7e074e"", ""centos1"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:04"" } }, ""state"": { ""running"": true, ""status"": ""running"" } } }, ""centos2"": { ""test_centos2_1"": { ""cmd"": [ ""sleep"", ""1d"" ], ""image"": ""centos:centos7"", ""labels"": { ""build-date"": ""2016-06-02"", ""com.docker.compose.config-hash"": ""0d0b17f36bed81ed3668740e739677660e8f65e393cb9c3f57c3d0887c16ecc0"", ""com.docker.compose.container-number"": ""1"", ""com.docker.compose.oneoff"": ""False"", ""com.docker.compose.project"": ""test"", ""com.docker.compose.service"": ""centos2"", ""com.docker.compose.version"": ""1.7.1"", ""license"": ""GPLv2"", ""name"": ""CentOS Base Image"", ""vendor"": ""CentOS"" }, ""networks"": { ""test_default"": { ""IPAddress"": ""172.20.0.2"", ""IPPrefixLen"": 16, ""aliases"": [ ""81c91cb6007b"", ""centos2"" ], ""globalIPv6"": """", ""globalIPv6PrefixLen"": 0, ""links"": null, ""macAddress"": ""02:42:ac:14:00:02"" } }, ""state"": { ""running"": true, ""status"": ""running"" } } }, ""centos3"": {} }, ""changed"": true } ``` ",1,docker service wrong logic or processing services scale issue type bug report component name docker service ansible version ansible configuration nothing special os environment centos docker docker compose summary docker service launches services not mentioned in service parameter steps to reproduce cat docker compose yml version services image centos command sleep 
image centos command sleep image centos command sleep ansible m docker service c local localhost a project src tmp test services scale state present expected results i expect that only service launched actual results both and scaled launched localhost success ansible facts test cmd sleep image centos labels build date com docker compose config hash com docker compose container number com docker compose oneoff false com docker compose project test com docker compose service com docker compose version license name centos base image vendor centos networks test default ipaddress ipprefixlen aliases links null macaddress ac state running true status running test cmd sleep image centos labels build date com docker compose config hash com docker compose container number com docker compose oneoff false com docker compose project test com docker compose service com docker compose version license name centos base image vendor centos networks test default ipaddress ipprefixlen aliases links null macaddress ac state running true status running test cmd sleep image centos labels build date com docker compose config hash com docker compose container number com docker compose oneoff false com docker compose project test com docker compose service com docker compose version license name centos base image vendor centos networks test default ipaddress ipprefixlen aliases links null macaddress ac state running true status running changed true ,1 1712,6574459614.0,IssuesEvent,2017-09-11 12:58:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Service Module 'enable' does not work as expected with SysV scripts on systemd Systems,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Service Module ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION ``` [defaults] gathering = smart host_key_checking = False inventory = /etc/ansible/hosts library = /usr/share/ansible log_path = /var/log/ansible/ansible.log retry_files_enabled = False stdout_callback = skippy [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=1800s transport = ssh ``` ##### OS / ENVIRONMENT Ansible Controller (AMZN Linux): ``` Linux ip-10-27-0-198 4.4.23-31.54.amzn1.x86_64 #1 SMP Tue Oct 18 22:02:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` Remote Target (RHEL7): ``` Linux ip-10-27-5-86.a730491757039.amazonaws.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When using the Service module against a RHEL7 instance, it appears the service module is not working as intended when using SysV scripts. This was working in 2.1.2.0. It produces a strange behaviour, in that if the service exists in `/etc/init.d/$SERVICE_NAME` but not in a `chkconfig --list` output, it will fail to enable the service, as it cannot find it by the name. See actual results below for details analysis. To note, this works fine on Amazon Linux (which uses SysV by default). ##### STEPS TO REPRODUCE - Place a SysV init script in /etc/init.d - Run the service module with the service name, and `enabled=yes` ##### EXPECTED RESULTS It is expected the SysV script should be enabled. ##### ACTUAL RESULTS # On the Target instance ``` [root@ip-10-27-5-86 ~]# ls -la /etc/init.d/ total 60 drwxr-xr-x. 2 root root 4096 Nov 3 18:21 . drwxr-xr-x. 10 root root 4096 Nov 3 18:14 .. -rwxr-xr-x. 
1 root root 318 Aug 21 2015 choose_repo -rw-r--r--. 1 root root 15131 Sep 12 06:47 functions -rwxrwxr-x. 1 root root 5056 Nov 3 18:21 logstash -rwxr-xr-x. 1 root root 2989 Sep 12 06:47 netconsole -rwxr-xr-x. 1 root root 6643 Sep 12 06:47 network -rw-r--r--. 1 root root 1160 Oct 7 09:56 README -rwxr-xr-x. 1 root root 1868 Aug 21 2015 rh-cloud-firstboot -rwxr-xr-x. 1 root root 2437 Jun 26 2015 rhns [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # From the Controller Try to enable the service ``` [A730491757039\joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a ""name=logstash enabled=yes"" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `"" && echo ansible-tmp-1478215841.03-177671857836162=""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `"" ) && sleep 0'""'""'' <10.27.5.77> PUT /tmp/tmpjS20Hc TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py && sleep 0'""'""'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o 
ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-uarnsjvhnpksvmndiulirdbysmztqcrt; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `"" && echo ansible-tmp-1478215841.49-3431261890578=""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `"" ) && sleep 0'""'""'' <10.27.5.77> PUT /tmp/tmpmO92Mk TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py && sleep 0'""'""'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ltdddlwqyhmziebpurcpqjdscqyerinz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py; rm -rf 
""/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' 10.27.5.77 | FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""logstash"", ""state"": null, ""user"": false } }, ""msg"": ""Could not find the requested service \""'logstash'\"": "" } ``` # On the Target Instance, run chkconfig logstash on, disable it, so we can re-run the service module ``` [root@ip-10-27-5-86 ~]# chkconfig logstash on [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off [root@ip-10-27-5-86 ~]# chkconfig logstash off [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:off 3:off 4:off 5:off 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # On the Controller, run the service module ``` [joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a ""name=logstash enabled=yes"" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `"" && echo ansible-tmp-1478214073.21-133810435447209=""` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `"" ) && sleep 0'""'""'' <10.27.5.86> PUT /tmp/tmpPM0eIY TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py && sleep 0'""'""'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-dpsldcejhnvsmslcmfjivastwsjrebmu; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `"" && echo ansible-tmp-1478214082.63-24802563064003=""` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `"" ) && sleep 0'""'""'' <10.27.5.86> PUT /tmp/tmpxeCT1Q TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'chmod u+x 
/home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py && sleep 0'""'""'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-hvoafitpgnzzmujuqatnpatrpfpvuhim; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' 10.27.5.86 | SUCCESS => { ""changed"": true, ""enabled"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""logstash"", ""state"": null, ""user"": false } }, ""name"": ""logstash"", ""status"": { ""ActiveEnterTimestampMonotonic"": ""0"", ""ActiveExitTimestampMonotonic"": ""0"", ""ActiveState"": ""inactive"", ""After"": ""systemd-journald.socket remote-fs.target system.slice basic.target"", ""AllowIsolate"": ""no"", ""AssertResult"": ""no"", ""AssertTimestampMonotonic"": ""0"", ""Before"": ""shutdown.target"", ""BlockIOAccounting"": ""no"", ""BlockIOWeight"": ""18446744073709551615"", ""CPUAccounting"": ""no"", ""CPUQuotaPerSecUSec"": ""infinity"", ""CPUSchedulingPolicy"": ""0"", ""CPUSchedulingPriority"": ""0"", ""CPUSchedulingResetOnFork"": ""no"", ""CPUShares"": ""18446744073709551615"", ""CanIsolate"": ""no"", ""CanReload"": ""no"", ""CanStart"": ""yes"", ""CanStop"": ""yes"", ""CapabilityBoundingSet"": ""18446744073709551615"", ""ConditionResult"": ""no"", ""ConditionTimestampMonotonic"": ""0"", ""Conflicts"": ""shutdown.target"", ""ControlPID"": ""0"", ""DefaultDependencies"": ""yes"", ""Delegate"": ""no"", ""Description"": ""LSB: Starts Logstash as a daemon."", ""DevicePolicy"": ""auto"", ""Documentation"": ""man:systemd-sysv-generator(8)"", ""ExecMainCode"": ""0"", ""ExecMainExitTimestampMonotonic"": ""0"", ""ExecMainPID"": ""0"", ""ExecMainStartTimestampMonotonic"": ""0"", ""ExecMainStatus"": ""0"", ""ExecStart"": ""{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }"", ""ExecStop"": ""{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }"", ""FailureAction"": ""none"", ""FileDescriptorStoreMax"": ""0"", ""FragmentPath"": ""/run/systemd/generator.late/logstash.service"", ""GuessMainPID"": ""no"", ""IOScheduling"": ""0"", ""Id"": ""logstash.service"", ""IgnoreOnIsolate"": ""no"", ""IgnoreOnSnapshot"": ""no"", ""IgnoreSIGPIPE"": ""no"", ""InactiveEnterTimestampMonotonic"": ""0"", ""InactiveExitTimestampMonotonic"": ""0"", ""JobTimeoutAction"": ""none"", ""JobTimeoutUSec"": ""0"", ""KillMode"": ""process"", ""KillSignal"": ""15"", ""LimitAS"": ""18446744073709551615"", ""LimitCORE"": ""18446744073709551615"", ""LimitCPU"": ""18446744073709551615"", ""LimitDATA"": ""18446744073709551615"", 
""LimitFSIZE"": ""18446744073709551615"", ""LimitLOCKS"": ""18446744073709551615"", ""LimitMEMLOCK"": ""65536"", ""LimitMSGQUEUE"": ""819200"", ""LimitNICE"": ""0"", ""LimitNOFILE"": ""4096"", ""LimitNPROC"": ""31146"", ""LimitRSS"": ""18446744073709551615"", ""LimitRTPRIO"": ""0"", ""LimitRTTIME"": ""18446744073709551615"", ""LimitSIGPENDING"": ""31146"", ""LimitSTACK"": ""18446744073709551615"", ""LoadState"": ""loaded"", ""MainPID"": ""0"", ""MemoryAccounting"": ""no"", ""MemoryCurrent"": ""18446744073709551615"", ""MemoryLimit"": ""18446744073709551615"", ""MountFlags"": ""0"", ""Names"": ""logstash.service"", ""NeedDaemonReload"": ""no"", ""Nice"": ""0"", ""NoNewPrivileges"": ""no"", ""NonBlocking"": ""no"", ""NotifyAccess"": ""none"", ""OOMScoreAdjust"": ""0"", ""OnFailureJobMode"": ""replace"", ""PermissionsStartOnly"": ""no"", ""PrivateDevices"": ""no"", ""PrivateNetwork"": ""no"", ""PrivateTmp"": ""no"", ""ProtectHome"": ""no"", ""ProtectSystem"": ""no"", ""RefuseManualStart"": ""no"", ""RefuseManualStop"": ""no"", ""RemainAfterExit"": ""yes"", ""Requires"": ""basic.target"", ""Restart"": ""no"", ""RestartUSec"": ""100ms"", ""Result"": ""success"", ""RootDirectoryStartOnly"": ""no"", ""RuntimeDirectoryMode"": ""0755"", ""SameProcessGroup"": ""no"", ""SecureBits"": ""0"", ""SendSIGHUP"": ""no"", ""SendSIGKILL"": ""yes"", ""Slice"": ""system.slice"", ""SourcePath"": ""/etc/rc.d/init.d/logstash"", ""StandardError"": ""inherit"", ""StandardInput"": ""null"", ""StandardOutput"": ""journal"", ""StartLimitAction"": ""none"", ""StartLimitBurst"": ""5"", ""StartLimitInterval"": ""10000000"", ""StartupBlockIOWeight"": ""18446744073709551615"", ""StartupCPUShares"": ""18446744073709551615"", ""StatusErrno"": ""0"", ""StopWhenUnneeded"": ""no"", ""SubState"": ""dead"", ""SyslogLevelPrefix"": ""yes"", ""SyslogPriority"": ""30"", ""SystemCallErrorNumber"": ""0"", ""TTYReset"": ""no"", ""TTYVHangup"": ""no"", ""TTYVTDisallocate"": ""no"", ""TimeoutStartUSec"": ""5min"", ""TimeoutStopUSec"": ""5min"", ""TimerSlackNSec"": ""50000"", ""Transient"": ""no"", ""Type"": ""forking"", ""UMask"": ""0022"", ""UnitFilePreset"": ""disabled"", ""UnitFileState"": ""bad"", ""Wants"": ""system.slice"", ""WatchdogTimestampMonotonic"": ""0"", ""WatchdogUSec"": ""0"" } } ``` ",True,"Service Module 'enable' does not work as expected with SysV scripts on systemd Systems - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Service Module ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION ``` [defaults] gathering = smart host_key_checking = False inventory = /etc/ansible/hosts library = /usr/share/ansible log_path = /var/log/ansible/ansible.log retry_files_enabled = False stdout_callback = skippy [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=1800s transport = ssh ``` ##### OS / ENVIRONMENT Ansible Controller (AMZN Linux): ``` Linux ip-10-27-0-198 4.4.23-31.54.amzn1.x86_64 #1 SMP Tue Oct 18 22:02:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` Remote Target (RHEL7): ``` Linux ip-10-27-5-86.a730491757039.amazonaws.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When using the Service module against a RHEL7 instance, it appears the service module is not working as intended when using SysV scripts. This was working in 2.1.2.0. 
It produces a strange behaviour, in that if the service exists in `/etc/init.d/$SERVICE_NAME` but not in a `chkconfig --list` output, it will fail to enable the service, as it cannot find it by the name. See actual results below for details analysis. To note, this works fine on Amazon Linux (which uses SysV by default). ##### STEPS TO REPRODUCE - Place a SysV init script in /etc/init.d - Run the service module with the service name, and `enabled=yes` ##### EXPECTED RESULTS It is expected the SysV script should be enabled. ##### ACTUAL RESULTS # On the Target instance ``` [root@ip-10-27-5-86 ~]# ls -la /etc/init.d/ total 60 drwxr-xr-x. 2 root root 4096 Nov 3 18:21 . drwxr-xr-x. 10 root root 4096 Nov 3 18:14 .. -rwxr-xr-x. 1 root root 318 Aug 21 2015 choose_repo -rw-r--r--. 1 root root 15131 Sep 12 06:47 functions -rwxrwxr-x. 1 root root 5056 Nov 3 18:21 logstash -rwxr-xr-x. 1 root root 2989 Sep 12 06:47 netconsole -rwxr-xr-x. 1 root root 6643 Sep 12 06:47 network -rw-r--r--. 1 root root 1160 Oct 7 09:56 README -rwxr-xr-x. 1 root root 1868 Aug 21 2015 rh-cloud-firstboot -rwxr-xr-x. 1 root root 2437 Jun 26 2015 rhns [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # From the Controller Try to enable the service ``` [A730491757039\joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a ""name=logstash enabled=yes"" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `"" && echo ansible-tmp-1478215841.03-177671857836162=""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `"" ) && sleep 0'""'""'' <10.27.5.77> PUT /tmp/tmpjS20Hc TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> 
SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py && sleep 0'""'""'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-uarnsjvhnpksvmndiulirdbysmztqcrt; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `"" && echo ansible-tmp-1478215841.49-3431261890578=""` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `"" ) && sleep 0'""'""'' <10.27.5.77> PUT /tmp/tmpmO92Mk TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py && sleep 0'""'""'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user 
<10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ltdddlwqyhmziebpurcpqjdscqyerinz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' 10.27.5.77 | FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""logstash"", ""state"": null, ""user"": false } }, ""msg"": ""Could not find the requested service \""'logstash'\"": "" } ``` # On the Target Instance, run chkconfig logstash on, disable it, so we can re-run the service module ``` [root@ip-10-27-5-86 ~]# chkconfig logstash on [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off [root@ip-10-27-5-86 ~]# chkconfig logstash off [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. 
choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:off 3:off 4:off 5:off 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # On the Controller, run the service module ``` [joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a ""name=logstash enabled=yes"" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `"" && echo ansible-tmp-1478214073.21-133810435447209=""` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `"" ) && sleep 0'""'""'' <10.27.5.86> PUT /tmp/tmpPM0eIY TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py && sleep 0'""'""'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-dpsldcejhnvsmslcmfjivastwsjrebmu; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' Running systemd Using 
module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `"" && echo ansible-tmp-1478214082.63-24802563064003=""` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `"" ) && sleep 0'""'""'' <10.27.5.86> PUT /tmp/tmpxeCT1Q TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '""'""'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py && sleep 0'""'""'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile=""/home/joeskyyy/.ssh/joeskyyy.pem""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-hvoafitpgnzzmujuqatnpatrpfpvuhim; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' 10.27.5.86 | SUCCESS => { ""changed"": true, ""enabled"": true, ""invocation"": { ""module_args"": { ""daemon_reload"": false, ""enabled"": true, ""masked"": null, ""name"": ""logstash"", ""state"": null, ""user"": false } }, ""name"": ""logstash"", ""status"": { ""ActiveEnterTimestampMonotonic"": ""0"", ""ActiveExitTimestampMonotonic"": ""0"", ""ActiveState"": ""inactive"", ""After"": ""systemd-journald.socket remote-fs.target system.slice basic.target"", ""AllowIsolate"": ""no"", ""AssertResult"": ""no"", ""AssertTimestampMonotonic"": ""0"", ""Before"": ""shutdown.target"", ""BlockIOAccounting"": ""no"", ""BlockIOWeight"": ""18446744073709551615"", ""CPUAccounting"": 
""no"", ""CPUQuotaPerSecUSec"": ""infinity"", ""CPUSchedulingPolicy"": ""0"", ""CPUSchedulingPriority"": ""0"", ""CPUSchedulingResetOnFork"": ""no"", ""CPUShares"": ""18446744073709551615"", ""CanIsolate"": ""no"", ""CanReload"": ""no"", ""CanStart"": ""yes"", ""CanStop"": ""yes"", ""CapabilityBoundingSet"": ""18446744073709551615"", ""ConditionResult"": ""no"", ""ConditionTimestampMonotonic"": ""0"", ""Conflicts"": ""shutdown.target"", ""ControlPID"": ""0"", ""DefaultDependencies"": ""yes"", ""Delegate"": ""no"", ""Description"": ""LSB: Starts Logstash as a daemon."", ""DevicePolicy"": ""auto"", ""Documentation"": ""man:systemd-sysv-generator(8)"", ""ExecMainCode"": ""0"", ""ExecMainExitTimestampMonotonic"": ""0"", ""ExecMainPID"": ""0"", ""ExecMainStartTimestampMonotonic"": ""0"", ""ExecMainStatus"": ""0"", ""ExecStart"": ""{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }"", ""ExecStop"": ""{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }"", ""FailureAction"": ""none"", ""FileDescriptorStoreMax"": ""0"", ""FragmentPath"": ""/run/systemd/generator.late/logstash.service"", ""GuessMainPID"": ""no"", ""IOScheduling"": ""0"", ""Id"": ""logstash.service"", ""IgnoreOnIsolate"": ""no"", ""IgnoreOnSnapshot"": ""no"", ""IgnoreSIGPIPE"": ""no"", ""InactiveEnterTimestampMonotonic"": ""0"", ""InactiveExitTimestampMonotonic"": ""0"", ""JobTimeoutAction"": ""none"", ""JobTimeoutUSec"": ""0"", ""KillMode"": ""process"", ""KillSignal"": ""15"", ""LimitAS"": ""18446744073709551615"", ""LimitCORE"": ""18446744073709551615"", ""LimitCPU"": ""18446744073709551615"", ""LimitDATA"": ""18446744073709551615"", ""LimitFSIZE"": ""18446744073709551615"", ""LimitLOCKS"": ""18446744073709551615"", ""LimitMEMLOCK"": ""65536"", ""LimitMSGQUEUE"": ""819200"", ""LimitNICE"": ""0"", ""LimitNOFILE"": ""4096"", ""LimitNPROC"": ""31146"", ""LimitRSS"": ""18446744073709551615"", ""LimitRTPRIO"": ""0"", ""LimitRTTIME"": ""18446744073709551615"", ""LimitSIGPENDING"": ""31146"", ""LimitSTACK"": ""18446744073709551615"", ""LoadState"": ""loaded"", ""MainPID"": ""0"", ""MemoryAccounting"": ""no"", ""MemoryCurrent"": ""18446744073709551615"", ""MemoryLimit"": ""18446744073709551615"", ""MountFlags"": ""0"", ""Names"": ""logstash.service"", ""NeedDaemonReload"": ""no"", ""Nice"": ""0"", ""NoNewPrivileges"": ""no"", ""NonBlocking"": ""no"", ""NotifyAccess"": ""none"", ""OOMScoreAdjust"": ""0"", ""OnFailureJobMode"": ""replace"", ""PermissionsStartOnly"": ""no"", ""PrivateDevices"": ""no"", ""PrivateNetwork"": ""no"", ""PrivateTmp"": ""no"", ""ProtectHome"": ""no"", ""ProtectSystem"": ""no"", ""RefuseManualStart"": ""no"", ""RefuseManualStop"": ""no"", ""RemainAfterExit"": ""yes"", ""Requires"": ""basic.target"", ""Restart"": ""no"", ""RestartUSec"": ""100ms"", ""Result"": ""success"", ""RootDirectoryStartOnly"": ""no"", ""RuntimeDirectoryMode"": ""0755"", ""SameProcessGroup"": ""no"", ""SecureBits"": ""0"", ""SendSIGHUP"": ""no"", ""SendSIGKILL"": ""yes"", ""Slice"": ""system.slice"", ""SourcePath"": ""/etc/rc.d/init.d/logstash"", ""StandardError"": ""inherit"", ""StandardInput"": ""null"", ""StandardOutput"": ""journal"", ""StartLimitAction"": ""none"", ""StartLimitBurst"": ""5"", ""StartLimitInterval"": ""10000000"", ""StartupBlockIOWeight"": ""18446744073709551615"", ""StartupCPUShares"": 
""18446744073709551615"", ""StatusErrno"": ""0"", ""StopWhenUnneeded"": ""no"", ""SubState"": ""dead"", ""SyslogLevelPrefix"": ""yes"", ""SyslogPriority"": ""30"", ""SystemCallErrorNumber"": ""0"", ""TTYReset"": ""no"", ""TTYVHangup"": ""no"", ""TTYVTDisallocate"": ""no"", ""TimeoutStartUSec"": ""5min"", ""TimeoutStopUSec"": ""5min"", ""TimerSlackNSec"": ""50000"", ""Transient"": ""no"", ""Type"": ""forking"", ""UMask"": ""0022"", ""UnitFilePreset"": ""disabled"", ""UnitFileState"": ""bad"", ""Wants"": ""system.slice"", ""WatchdogTimestampMonotonic"": ""0"", ""WatchdogUSec"": ""0"" } } ``` ",1,service module enable does not work as expected with sysv scripts on systemd systems issue type bug report component name service module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path configuration gathering smart host key checking false inventory etc ansible hosts library usr share ansible log path var log ansible ansible log retry files enabled false stdout callback skippy ssh args o controlmaster auto o controlpersist transport ssh os environment ansible controller amzn linux linux ip smp tue oct utc gnu linux remote target linux ip amazonaws com smp thu oct edt gnu linux summary when using the service module against a instance it appears the service module is not working as intended when using sysv scripts this was working in it produces a strange behaviour in that if the service exists in etc init d service name but not in a chkconfig list output it will fail to enable the service as it cannot find it by the name see actual results below for details analysis to note this works fine on amazon linux which uses sysv by default steps to reproduce place a sysv init script in etc init d run the service module with the service name and enabled yes expected results it is expected the sysv script should be enabled actual results on the target instance ls la etc init d total drwxr xr x root root nov drwxr xr x root root nov rwxr xr x root root aug choose repo rw r r root root sep functions rwxrwxr x root root nov logstash rwxr xr x root root sep netconsole rwxr xr x root root sep network rw r r root root oct readme rwxr xr x root root aug rh cloud firstboot rwxr xr x root root jun rhns chkconfig list note this output shows sysv services only and does not include native systemd services sysv configuration data might be overridden by native systemd configuration if you want to list systemd services use systemctl list unit files to see services enabled on particular target use systemctl list dependencies choose repo off off on on on on off netconsole off off off off off off off network off off on on on on off rh cloud firstboot off off off off off off off rhnsd off off on on on on off from the controller try to enable the service ansible i hosts all m service a name logstash enabled yes vvvv become using etc ansible ansible cfg as config file loading callback plugin minimal of type stdout from usr local lib site packages ansible plugins callback init pyc using module file usr local lib site packages ansible modules core system setup py establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c umask mkdir p echo 
status execstop path etc rc d init d logstash argv etc rc d init d logstash stop ignore errors no start time stop time pid code null status failureaction none filedescriptorstoremax fragmentpath run systemd generator late logstash service guessmainpid no ioscheduling id logstash service ignoreonisolate no ignoreonsnapshot no ignoresigpipe no inactiveentertimestampmonotonic inactiveexittimestampmonotonic jobtimeoutaction none jobtimeoutusec killmode process killsignal limitas limitcore limitcpu limitdata limitfsize limitlocks limitmemlock limitmsgqueue limitnice limitnofile limitnproc limitrss limitrtprio limitrttime limitsigpending limitstack loadstate loaded mainpid memoryaccounting no memorycurrent memorylimit mountflags names logstash service needdaemonreload no nice nonewprivileges no nonblocking no notifyaccess none oomscoreadjust onfailurejobmode replace permissionsstartonly no privatedevices no privatenetwork no privatetmp no protecthome no protectsystem no refusemanualstart no refusemanualstop no remainafterexit yes requires basic target restart no restartusec result success rootdirectorystartonly no runtimedirectorymode sameprocessgroup no securebits sendsighup no sendsigkill yes slice system slice sourcepath etc rc d init d logstash standarderror inherit standardinput null standardoutput journal startlimitaction none startlimitburst startlimitinterval startupblockioweight startupcpushares statuserrno stopwhenunneeded no substate dead sysloglevelprefix yes syslogpriority systemcallerrornumber ttyreset no ttyvhangup no ttyvtdisallocate no timeoutstartusec timeoutstopusec timerslacknsec transient no type forking umask unitfilepreset disabled unitfilestate bad wants system slice watchdogtimestampmonotonic watchdogusec ,1 1699,6574376520.0,IssuesEvent,2017-09-11 12:39:45,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_asg does not work when using replace_all_instances option,affects_2.2 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OSX, local_action ##### SUMMARY When using the option replace_all_instances within below task: ``` - name: Configure Auto Scaling Groups local_action: module: ec2_asg name: ""{{ item.name }}"" desired_capacity: ""{{ item.desired_capacity }}"" launch_config_name: ""{{ item.launch_config }}"" replace_all_instances: yes region: sa-east-1 become: false with_items: - { name: ""{{ ondemand_asg_name }}"", desired_capacity: ""{{ ondemand_desired_capacity }}"", launch_config: ""{{ ondemand_lc }}"" } #- { name: ""{{ spot_asg_name }}"", desired_capacity: ""{{ spot_desired_capacity }}"", launch_config: ""{{ spot_lc }}"" } register: asg_output when: configasg is defined ``` I can see using aws console that the group size is indeed changing and that the launch_configuration was replaced as expected, but while the module is waiting for the newly launched instances to be InService it fails with the following message and leaves the asg with unwanted minimum, maximum and desired values as well as the effect of using replace_all_instances is not reached since there is old instances left running with older launch_configuration. 
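For comparison, a minimal sketch of the same kind of rolling replacement with the group bounds pinned explicitly is shown first; the group name, launch configuration name and sizes are placeholders rather than values from this report, and this is only an illustration of the option, not a confirmed fix. The actual failure output follows the sketch.

```
# Hedged sketch only -- the group name, launch configuration name and sizes are
# placeholders; only the region is taken from this report.
- name: Roll instances onto a new launch configuration
  ec2_asg:
    name: example-asg
    launch_config_name: example-lc-v2
    min_size: 2
    max_size: 4
    desired_capacity: 2
    replace_all_instances: yes
    replace_batch_size: 1   # replace one instance at a time
    region: sa-east-1
  register: asg_output
```
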
``` Using module file /Library/Python/2.7/site-packages/ansible/modules/core/cloud/amazon/ec2_asg.py EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478796730.65-629465686899 `"" && echo ansible-tmp-1478796730.65-629465686899=""` echo $HOME/.ansible/tmp/ansible-tmp-1478796730.65-629465686899 `"" ) && sleep 0' PUT /var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/tmpS2gR2m TO /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py EXEC /bin/sh -c 'chmod u+x /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py; rm -rf ""/Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 875, in main() File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 869, in main replace_changed, asg_properties=replace(connection, module) File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 644, in replace break_early, desired_size, term_instances = terminate_batch(connection, module, i, instances, False) File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 730, in terminate_batch as_group.update() File ""/Library/Python/2.7/site-packages/boto/ec2/autoscale/group.py"", line 282, in update return self.connection._update_group('UpdateAutoScalingGroup', self) File ""/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py"", line 183, in _update_group return self.get_object(op, params, Request) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1208, in get_object raise self.ResponseError(response.status, response.reason, body) boto.exception.BotoServerError: BotoServerError: 400 Bad Request Sender MalformedInput 3968c89c-a766-11e6-8907-3df0a5528d6b failed: [127.0.0.1 -> localhost] (item={u'desired_capacity': u'2', u'launch_config': u'zupme-gateway-autoscaling-2.25.0-spot-2016-11-10-10_32', u'name': u'zupme-gateway-autoscaling-spot'}) => { ""failed"": true, ""invocation"": { ""module_name"": ""ec2_asg"" }, ""item"": { ""desired_capacity"": ""2"", ""launch_config"": ""zupme-gateway-autoscaling-2.25.0-spot-2016-11-10-10_32"", ""name"": ""zupme-gateway-autoscaling-spot"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 875, in \n main()\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 869, in main\n replace_changed, asg_properties=replace(connection, module)\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 644, in replace\n break_early, desired_size, term_instances = terminate_batch(connection, module, i, instances, False)\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 730, in terminate_batch\n as_group.update()\n File \""/Library/Python/2.7/site-packages/boto/ec2/autoscale/group.py\"", line 282, in update\n return 
self.connection._update_group('UpdateAutoScalingGroup', self)\n File \""/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py\"", line 183, in _update_group\n return self.get_object(op, params, Request)\n File \""/Library/Python/2.7/site-packages/boto/connection.py\"", line 1208, in get_object\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.BotoServerError: BotoServerError: 400 Bad Request\n\n \n Sender\n MalformedInput\n \n 3968c89c-a766-11e6-8907-3df0a5528d6b\n\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ##### STEPS TO REPRODUCE Create a ec2_asg task with replace_all_instances and try to run it. ##### EXPECTED RESULTS launch_configuration changed and instances with older launch_configuration should be terminated. ##### ACTUAL RESULTS ec2_asg leaves the asg with unwanted minimum, maximum and desired values as well as the effect of using replace_all_instances is not reached since there are old instances left running with older launch_configuration. ",True,"ec2_asg does not work when using replace_all_instances option - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT OSX, local_action ##### SUMMARY When using the option replace_all_instances within below task: ``` - name: Configure Auto Scaling Groups local_action: module: ec2_asg name: ""{{ item.name }}"" desired_capacity: ""{{ item.desired_capacity }}"" launch_config_name: ""{{ item.launch_config }}"" replace_all_instances: yes region: sa-east-1 become: false with_items: - { name: ""{{ ondemand_asg_name }}"", desired_capacity: ""{{ ondemand_desired_capacity }}"", launch_config: ""{{ ondemand_lc }}"" } #- { name: ""{{ spot_asg_name }}"", desired_capacity: ""{{ spot_desired_capacity }}"", launch_config: ""{{ spot_lc }}"" } register: asg_output when: configasg is defined ``` I can see using aws console that the group size is indeed changing and that the launch_configuration was replaced as expected, but while the module is waiting for the newly launched instances to be InService it fails with the following message and leaves the asg with unwanted minimum, maximum and desired values as well as the effect of using replace_all_instances is not reached since there is old instances left running with older launch_configuration. ``` Using module file /Library/Python/2.7/site-packages/ansible/modules/core/cloud/amazon/ec2_asg.py EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478796730.65-629465686899 `"" && echo ansible-tmp-1478796730.65-629465686899=""` echo $HOME/.ansible/tmp/ansible-tmp-1478796730.65-629465686899 `"" ) && sleep 0' PUT /var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/tmpS2gR2m TO /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py EXEC /bin/sh -c 'chmod u+x /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py && sleep 0' EXEC /bin/sh -c '/usr/bin/python /Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/ec2_asg.py; rm -rf ""/Users/underguiz/.ansible/tmp/ansible-tmp-1478796730.65-629465686899/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 875, in main() File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 869, in main replace_changed, asg_properties=replace(connection, module) File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 644, in replace break_early, desired_size, term_instances = terminate_batch(connection, module, i, instances, False) File ""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py"", line 730, in terminate_batch as_group.update() File ""/Library/Python/2.7/site-packages/boto/ec2/autoscale/group.py"", line 282, in update return self.connection._update_group('UpdateAutoScalingGroup', self) File ""/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py"", line 183, in _update_group return self.get_object(op, params, Request) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1208, in get_object raise self.ResponseError(response.status, response.reason, body) boto.exception.BotoServerError: BotoServerError: 400 Bad Request Sender MalformedInput 3968c89c-a766-11e6-8907-3df0a5528d6b failed: [127.0.0.1 -> localhost] (item={u'desired_capacity': u'2', u'launch_config': u'zupme-gateway-autoscaling-2.25.0-spot-2016-11-10-10_32', u'name': u'zupme-gateway-autoscaling-spot'}) => { ""failed"": true, ""invocation"": { ""module_name"": ""ec2_asg"" }, ""item"": { ""desired_capacity"": ""2"", ""launch_config"": ""zupme-gateway-autoscaling-2.25.0-spot-2016-11-10-10_32"", ""name"": ""zupme-gateway-autoscaling-spot"" }, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 875, in \n main()\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 869, in main\n replace_changed, asg_properties=replace(connection, module)\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 644, in replace\n break_early, desired_size, term_instances = terminate_batch(connection, module, i, instances, False)\n File \""/var/folders/_n/vjpj21ld7nzdxl50d46w6bjm0000gn/T/ansible_2eKuRg/ansible_module_ec2_asg.py\"", line 730, in terminate_batch\n as_group.update()\n File \""/Library/Python/2.7/site-packages/boto/ec2/autoscale/group.py\"", line 282, in update\n return self.connection._update_group('UpdateAutoScalingGroup', self)\n File \""/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py\"", line 183, in _update_group\n return self.get_object(op, params, Request)\n File \""/Library/Python/2.7/site-packages/boto/connection.py\"", line 1208, in get_object\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.BotoServerError: BotoServerError: 400 Bad Request\n\n \n Sender\n MalformedInput\n \n 3968c89c-a766-11e6-8907-3df0a5528d6b\n\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"" } ``` ##### STEPS TO REPRODUCE Create a ec2_asg task with replace_all_instances and try to run it. ##### EXPECTED RESULTS launch_configuration changed and instances with older launch_configuration should be terminated. 
##### ACTUAL RESULTS ec2_asg leaves the asg with unwanted minimum, maximum and desired values as well as the effect of using replace_all_instances is not reached since there are old instances left running with older launch_configuration. ",1, asg does not work when using replace all instances option issue type bug report component name asg ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment osx local action summary when using the option replace all instances within below task name configure auto scaling groups local action module asg name item name desired capacity item desired capacity launch config name item launch config replace all instances yes region sa east become false with items name ondemand asg name desired capacity ondemand desired capacity launch config ondemand lc name spot asg name desired capacity spot desired capacity launch config spot lc register asg output when configasg is defined i can see using aws console that the group size is indeed changing and that the launch configuration was replaced as expected but while the module is waiting for the newly launched instances to be inservice it fails with the following message and leaves the asg with unwanted minimum maximum and desired values as well as the effect of using replace all instances is not reached since there is old instances left running with older launch configuration using module file library python site packages ansible modules core cloud amazon asg py exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders n t to users underguiz ansible tmp ansible tmp asg py exec bin sh c chmod u x users underguiz ansible tmp ansible tmp users underguiz ansible tmp ansible tmp asg py sleep exec bin sh c usr bin python users underguiz ansible tmp ansible tmp asg py rm rf users underguiz ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file var folders n t ansible ansible module asg py line in main file var folders n t ansible ansible module asg py line in main replace changed asg properties replace connection module file var folders n t ansible ansible module asg py line in replace break early desired size term instances terminate batch connection module i instances false file var folders n t ansible ansible module asg py line in terminate batch as group update file library python site packages boto autoscale group py line in update return self connection update group updateautoscalinggroup self file library python site packages boto autoscale init py line in update group return self get object op params request file library python site packages boto connection py line in get object raise self responseerror response status response reason body boto exception botoservererror botoservererror bad request errorresponse xmlns sender malformedinput failed item u desired capacity u u launch config u zupme gateway autoscaling spot u name u zupme gateway autoscaling spot failed true invocation module name asg item desired capacity launch config zupme gateway autoscaling spot name zupme gateway autoscaling spot module stderr traceback most recent call last n file var folders n t ansible ansible module asg py line in n main n file var folders n t ansible ansible module asg py line in main n replace changed asg properties replace connection module n file var folders n t ansible ansible module asg py 
line in replace n break early desired size term instances terminate batch connection module i instances false n file var folders n t ansible ansible module asg py line in terminate batch n as group update n file library python site packages boto autoscale group py line in update n return self connection update group updateautoscalinggroup self n file library python site packages boto autoscale init py line in update group n return self get object op params request n file library python site packages boto connection py line in get object n raise self responseerror response status response reason body nboto exception botoservererror botoservererror bad request n n sender n malformedinput n n n n n module stdout msg module failure steps to reproduce create a asg task with replace all instances and try to run it expected results launch configuration changed and instances with older launch configuration should be terminated actual results asg leaves the asg with unwanted minimum maximum and desired values as well as the effect of using replace all instances is not reached since there are old instances left running with older launch configuration ,1 1868,6577492913.0,IssuesEvent,2017-09-12 01:17:51,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker: can't launch container with memory_limit=0,affects_2.0 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION default configuration ##### OS / ENVIRONMENT docker-1.10.3 ##### SUMMARY ansible module fails to launch containers with memory_limit=0 ##### STEPS TO REPRODUCE [root@js ~]# ansible -c local -m docker -a 'image=centos memory_limit=0' localhost ##### EXPECTED RESULTS Container launched without memory limit set ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Could not convert 0 to integer"" } ``` ",True,"docker: can't launch container with memory_limit=0 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION default configuration ##### OS / ENVIRONMENT docker-1.10.3 ##### SUMMARY ansible module fails to launch containers with memory_limit=0 ##### STEPS TO REPRODUCE [root@js ~]# ansible -c local -m docker -a 'image=centos memory_limit=0' localhost ##### EXPECTED RESULTS Container launched without memory limit set ##### ACTUAL RESULTS ``` localhost | FAILED! 
=> { ""changed"": false, ""failed"": true, ""msg"": ""Could not convert 0 to integer"" } ``` ",1,docker can t launch container with memory limit issue type bug report component name docker ansible version ansible configuration default configuration os environment docker summary ansible module fails to launch containers with memory limit steps to reproduce ansible c local m docker a image centos memory limit localhost expected results container launched without memory limit set actual results localhost failed changed false failed true msg could not convert to integer ,1 1806,6575943409.0,IssuesEvent,2017-09-11 17:55:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"feature request: rhn_channel, and other rhn_* modules, should have validate_certs parameter",affects_2.1 feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME rhn_channel rhn_* ##### ANSIBLE VERSION ``` [snemirovsky@SN-WS-Fedora24 patch]$ ansible --version [WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting ansible 2.1.1.0 config file = /ansible/conf/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS excected my code to work ##### ACTUAL RESULTS ``` TASK [RHN_CHANNEL_configure : Configure Channel on Satellite] ****************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) ``` ",True,"feature request: rhn_channel, and other rhn_* modules, should have validate_certs parameter - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME rhn_channel rhn_* ##### ANSIBLE VERSION ``` [snemirovsky@SN-WS-Fedora24 patch]$ ansible --version [WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting ansible 2.1.1.0 config file = /ansible/conf/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS excected my code to work ##### ACTUAL RESULTS ``` TASK [RHN_CHANNEL_configure : Configure Channel on Satellite] ****************** An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) ``` ",1,feature request rhn channel and other rhn modules should have validate certs parameter issue type feature idea component name rhn channel rhn ansible version ansible version log file at var log ansible log is not writeable and we cannot create it aborting ansible config file ansible conf ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary steps to reproduce run the module agains rh satellite api url with self signed or expired cert expected results excected my code to work actual results task an exception occurred during task execution to see the full traceback use vvv the error was ssl sslerror certificate verify failed ssl c ,1 874,4540095870.0,IssuesEvent,2016-09-09 13:37:41,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,eos eapi failed commands,affects_2.1 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/eos_command ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Running the eos_command 'show version' using the eapi transport works fine. When running the command 'show running-configuration section Et1' it returns a failure. Both work fine when using cli as the transport. Switch is an Arista 7150S running 4.16.7M ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` PLAYBOOK: test_arista_command.yml ********************************************** 1 plays in test_arista_command.yml PLAY [arista_test] ************************************************************* TASK [setup] ******************************************************************* <10.24.1.14> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.14> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' <10.24.1.13> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.13> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' ok: [myswitch] TASK [test_arista_command : test command] ************************************** task path: /home/reynolds/wc/cfg/ansible/roles/test_arista_command/tasks/main.yml:1 <10.24.1.13> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.13> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' fatal: [myswitch]: FAILED! => {""changed"": false, ""code"": 1003, ""commands"": [""show running-config section Et1""], ""data"": [{}, {""errors"": [""Command cannot be used over the API at this time. 
To see ASCII output, set format='text' in your request""]}], ""failed"": true, ""invocation"": {""module_args"": {""auth_pass"": null, ""authorize"": true, ""commands"": [""show running-config section Et1""], ""host"": ""10.24.1.13"", ""interval"": 1, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""retries"": 10, ""ssh_keyfile"": null, ""transport"": ""eapi"", ""url_password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""url_username"": ""ansible"", ""use_ssl"": false, ""username"": ""ansible"", ""waitfor"": null}, ""module_name"": ""eos_command""}, ""message"": ""CLI command 2 of 2 'show running-config section Et1' failed: unconverted command"", ""msg"": ""json-rpc error""} ``` ",True,"eos eapi failed commands - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME networking/eos_command ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Running the eos_command 'show version' using the eapi transport works fine. When running the command 'show running-configuration section Et1' it returns a failure. Both work fine when using cli as the transport. Switch is an Arista 7150S running 4.16.7M ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` PLAYBOOK: test_arista_command.yml ********************************************** 1 plays in test_arista_command.yml PLAY [arista_test] ************************************************************* TASK [setup] ******************************************************************* <10.24.1.14> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.14> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' <10.24.1.13> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.13> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' ok: [myswitch] TASK [test_arista_command : test command] ************************************** task path: /home/reynolds/wc/cfg/ansible/roles/test_arista_command/tasks/main.yml:1 <10.24.1.13> ESTABLISH LOCAL CONNECTION FOR USER: reynolds <10.24.1.13> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python && sleep 0' fatal: [myswitch]: FAILED! => {""changed"": false, ""code"": 1003, ""commands"": [""show running-config section Et1""], ""data"": [{}, {""errors"": [""Command cannot be used over the API at this time. 
To see ASCII output, set format='text' in your request""]}], ""failed"": true, ""invocation"": {""module_args"": {""auth_pass"": null, ""authorize"": true, ""commands"": [""show running-config section Et1""], ""host"": ""10.24.1.13"", ""interval"": 1, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""provider"": null, ""retries"": 10, ""ssh_keyfile"": null, ""transport"": ""eapi"", ""url_password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""url_username"": ""ansible"", ""use_ssl"": false, ""username"": ""ansible"", ""waitfor"": null}, ""module_name"": ""eos_command""}, ""message"": ""CLI command 2 of 2 'show running-config section Et1' failed: unconverted command"", ""msg"": ""json-rpc error""} ``` ",1,eos eapi failed commands issue type bug report component name networking eos command ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration os environment n a summary running the eos command show version using the eapi transport works fine when running the command show running configuration section it returns a failure both work fine when using cli as the transport switch is an arista running steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results playbook test arista command yml plays in test arista command yml play task establish local connection for user reynolds exec bin sh c lang c lc all c lc messages c usr bin python sleep establish local connection for user reynolds exec bin sh c lang c lc all c lc messages c usr bin python sleep ok task task path home reynolds wc cfg ansible roles test arista command tasks main yml establish local connection for user reynolds exec bin sh c lang c lc all c lc messages c usr bin python sleep fatal failed changed false code commands data failed true invocation module args auth pass null authorize true commands host interval password value specified in no log parameter port null provider null retries ssh keyfile null transport eapi url password value specified in no log parameter url username ansible use ssl false username ansible waitfor null module name eos command message cli command of show running config section failed unconverted command msg json rpc error ,1 1728,6574824683.0,IssuesEvent,2017-09-11 14:12:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Passphrase protected private-key require to enter passphrase several times on one task to one host,affects_2.1 docs_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report / Documentation Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION all default ##### OS / ENVIRONMENT Use _Putty_ to _Centos 7.x_ via _Vagrant_ on _VM VirtualBox_ at _Windows10_ ##### SUMMARY I have passphrase-protected-ssh-private-key for access the private git repo. I copy this key to target host but every time i run ansible-git-task it's asked me passphrase six (!) times for every single host. Yes i know that one ansible git command translate into several git commands. It was not so obviously but afer some investigation time i found it. So my next step was to use some of forwarding practices. And cannot do this at all. 
8( Not helped: ssh-agent + ssh-add ansible.cfg with ssh_args = -o ForwardAgent=true run playbook w/ or w/o sudo ##### STEPS TO REPRODUCE 1. Phassphrase private ssh key and private git repo (for example on bitbucket) 2. Create user (not root!) on remote host with this protected private key 3. Run ansible playbook command from control machine Ansible git task example: ``` - name: checkout repo git: repo=ssh://git@altssh.bitbucket.org:443/user/repo.git version=""{{ git_branch }}"" dest=""{{ dir_app }}"" accept_hostkey=""yes"" become: yes become_user: ""{{ user.login }}"" tags: ['app-update', 'sandbox'] ``` Playbook command example ``` [vagrant@localhost ~]$ ansible-playbook /vagrant/provisioning/sandbox/dd-apps-sandboxes.yml -i /vagrant/provisioning/dd-hosts.txt --limit=""brutto.dev"" --tags=""app-update"" PLAY [brutto.dev] ************************************************************** TASK [setup] ******************************************************************* ok: [brutto.dev] TASK [../roles/app : checkout repo] ******************************************** Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': ok: [brutto.dev] PLAY RECAP ********************************************************************* brutto.dev : ok=2 changed=0 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS I want enter passphrase one time or never if i use some forwarding ##### ACTUAL RESULTS Every time passphrase prompted six times! ",True,"Passphrase protected private-key require to enter passphrase several times on one task to one host - ##### ISSUE TYPE - Bug Report / Documentation Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION all default ##### OS / ENVIRONMENT Use _Putty_ to _Centos 7.x_ via _Vagrant_ on _VM VirtualBox_ at _Windows10_ ##### SUMMARY I have passphrase-protected-ssh-private-key for access the private git repo. I copy this key to target host but every time i run ansible-git-task it's asked me passphrase six (!) times for every single host. Yes i know that one ansible git command translate into several git commands. It was not so obviously but afer some investigation time i found it. So my next step was to use some of forwarding practices. And cannot do this at all. 8( Not helped: ssh-agent + ssh-add ansible.cfg with ssh_args = -o ForwardAgent=true run playbook w/ or w/o sudo ##### STEPS TO REPRODUCE 1. Phassphrase private ssh key and private git repo (for example on bitbucket) 2. Create user (not root!) on remote host with this protected private key 3. 
Run ansible playbook command from control machine Ansible git task example: ``` - name: checkout repo git: repo=ssh://git@altssh.bitbucket.org:443/user/repo.git version=""{{ git_branch }}"" dest=""{{ dir_app }}"" accept_hostkey=""yes"" become: yes become_user: ""{{ user.login }}"" tags: ['app-update', 'sandbox'] ``` Playbook command example ``` [vagrant@localhost ~]$ ansible-playbook /vagrant/provisioning/sandbox/dd-apps-sandboxes.yml -i /vagrant/provisioning/dd-hosts.txt --limit=""brutto.dev"" --tags=""app-update"" PLAY [brutto.dev] ************************************************************** TASK [setup] ******************************************************************* ok: [brutto.dev] TASK [../roles/app : checkout repo] ******************************************** Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': Enter passphrase for key '/home/brutto/.ssh/id_rsa': ok: [brutto.dev] PLAY RECAP ********************************************************************* brutto.dev : ok=2 changed=0 unreachable=0 failed=0 ``` ##### EXPECTED RESULTS I want enter passphrase one time or never if i use some forwarding ##### ACTUAL RESULTS Every time passphrase prompted six times! ",1,passphrase protected private key require to enter passphrase several times on one task to one host issue type bug report documentation report component name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration all default os environment use putty to centos x via vagrant on vm virtualbox at summary i have passphrase protected ssh private key for access the private git repo i copy this key to target host but every time i run ansible git task it s asked me passphrase six times for every single host yes i know that one ansible git command translate into several git commands it was not so obviously but afer some investigation time i found it so my next step was to use some of forwarding practices and cannot do this at all not helped ssh agent ssh add ansible cfg with ssh args o forwardagent true run playbook w or w o sudo steps to reproduce phassphrase private ssh key and private git repo for example on bitbucket create user not root on remote host with this protected private key run ansible playbook command from control machine ansible git task example name checkout repo git repo ssh git altssh bitbucket org user repo git version git branch dest dir app accept hostkey yes become yes become user user login tags playbook command example ansible playbook vagrant provisioning sandbox dd apps sandboxes yml i vagrant provisioning dd hosts txt limit brutto dev tags app update play task ok task enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa ok play recap brutto dev ok changed unreachable failed expected results i want enter passphrase one time or never if i use some forwarding actual results every time passphrase prompted six times ,1 1761,6574999395.0,IssuesEvent,2017-09-11 14:44:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Enable configuration of VM 
metric account,affects_2.3 azure cloud feature_idea waiting_on_maintainer,"ISSUE TYPE Feature Idea COMPONENT NAME http://docs.ansible.com/ansible/azure_module.html ANSIBLE VERSION N/A SUMMARY The ask is to be able to enable setting metrics storage account via the `azure_rm_virtualmachine` task. ",True,"Enable configuration of VM metric account - ISSUE TYPE Feature Idea COMPONENT NAME http://docs.ansible.com/ansible/azure_module.html ANSIBLE VERSION N/A SUMMARY The ask is to be able to enable setting metrics storage account via the `azure_rm_virtualmachine` task. ",1,enable configuration of vm metric account issue type feature idea component name ansible version n a summary the ask is to be able to enable setting metrics storage account via the azure rm virtualmachine task ,1 1836,6577368557.0,IssuesEvent,2017-09-12 00:25:27,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vol delete volume by name,affects_2.1 aws cloud feature_idea waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cloud/amazon/ec2_vol ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing extra ##### OS / ENVIRONMENT N/A ##### SUMMARY It would be useful if the ec2_vol task was able to either delete volumes by name instead of just by id, OR if the state=list actually was able to use the name to filter the set of returned results, rather than returning all volumes, that then need to get filtered. ##### STEPS TO REPRODUCE ``` - name: Terminate EBS volumes ec2_vol: region: ""{{ ec2_region }}"" name: ""some_name"" state: absent ``` ##### EXPECTED RESULTS EBS volume can be deleted by name. ##### ACTUAL RESULTS ``` <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014 `"" && echo ansible-tmp-1464717304.14-70488874953014=""` echo $HOME/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpRRCEio TO /home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/ec2_vol <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/ec2_vol; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/"" > /dev/null 2>&1 && sleep 0' failed: [localhost] (item=1) => {""failed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""delete_on_termination"": false, ""device_name"": null, ""ec2_url"": null, ""encrypted"": false, ""id"": null, ""instance"": null, ""iops"": null, ""name"": ""ml-cluster-denali-vol-1"", ""profile"": null, ""region"": ""us-east-1"", ""security_token"": null, ""snapshot"": null, ""state"": ""absent"", ""validate_certs"": true, ""volume_size"": null, ""volume_type"": ""standard"", ""zone"": null}, ""module_name"": ""ec2_vol""}, ""item"": ""1"", ""msg"": ""Value (None) for parameter volume is invalid. 
Expected: 'vol-...'.""} ``` ",True,"ec2_vol delete volume by name - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cloud/amazon/ec2_vol ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Nothing extra ##### OS / ENVIRONMENT N/A ##### SUMMARY It would be useful if the ec2_vol task was able to either delete volumes by name instead of just by id, OR if the state=list actually was able to use the name to filter the set of returned results, rather than returning all volumes, that then need to get filtered. ##### STEPS TO REPRODUCE ``` - name: Terminate EBS volumes ec2_vol: region: ""{{ ec2_region }}"" name: ""some_name"" state: absent ``` ##### EXPECTED RESULTS EBS volume can be deleted by name. ##### ACTUAL RESULTS ``` <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014 `"" && echo ansible-tmp-1464717304.14-70488874953014=""` echo $HOME/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpRRCEio TO /home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/ec2_vol <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/ec2_vol; rm -rf ""/home/vagrant/.ansible/tmp/ansible-tmp-1464717304.14-70488874953014/"" > /dev/null 2>&1 && sleep 0' failed: [localhost] (item=1) => {""failed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""delete_on_termination"": false, ""device_name"": null, ""ec2_url"": null, ""encrypted"": false, ""id"": null, ""instance"": null, ""iops"": null, ""name"": ""ml-cluster-denali-vol-1"", ""profile"": null, ""region"": ""us-east-1"", ""security_token"": null, ""snapshot"": null, ""state"": ""absent"", ""validate_certs"": true, ""volume_size"": null, ""volume_type"": ""standard"", ""zone"": null}, ""module_name"": ""ec2_vol""}, ""item"": ""1"", ""msg"": ""Value (None) for parameter volume is invalid. 
Expected: 'vol-...'.""} ``` ",1, vol delete volume by name issue type feature idea component name cloud amazon vol ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nothing extra os environment n a summary it would be useful if the vol task was able to either delete volumes by name instead of just by id or if the state list actually was able to use the name to filter the set of returned results rather than returning all volumes that then need to get filtered steps to reproduce name terminate ebs volumes vol region region name some name state absent expected results ebs volume can be deleted by name actual results establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmprrceio to home vagrant ansible tmp ansible tmp vol exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp vol rm rf home vagrant ansible tmp ansible tmp dev null sleep failed item failed true invocation module args aws access key null aws secret key null delete on termination false device name null url null encrypted false id null instance null iops null name ml cluster denali vol profile null region us east security token null snapshot null state absent validate certs true volume size null volume type standard zone null module name vol item msg value none for parameter volume is invalid expected vol ,1 1034,4827596907.0,IssuesEvent,2016-11-07 14:07:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloudformation state: absent requires template or template_url,affects_1.9 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: Bug Report ##### Ansible Version: Ansible 1.9.3, cloudformation.py version 9eb0c178ec2f4e94ae29329fa9694a381157b413 ##### Environment: Mac OS X El Captain ##### Summary: Using the latest (9eb0c178ec2f4e94ae29329fa9694a381157b413) cloudformation module, below **does not** works: ``` yaml - name: Removing RDS instances cloudformation2: stack_name: ""{{ db_stack_name }}"" state: ""absent"" region: ""{{ aws_region }}"" ``` But passing a template or template_url, even if dummy, works: ``` yaml - name: Removing RDS instances cloudformation2: stack_name: ""{{ db_stack_name }}"" state: ""absent"" region: ""{{ aws_region }}"" template_url: ""it's a bug"" ``` ##### Expected Results: According to docs, template/template_url should only be required when state: present. 
##### Actual Results: ``` TASK: [Removing RDS instances] ************************************************ failed: [localhost] => {""failed"": true} msg: Either template or template_url expected cloudformation2 [localhost] => { ""msg"": ""Either template or template_url expected"", ""failed"": true } ``` ",True,"cloudformation state: absent requires template or template_url - ##### Issue Type: Bug Report ##### Ansible Version: Ansible 1.9.3, cloudformation.py version 9eb0c178ec2f4e94ae29329fa9694a381157b413 ##### Environment: Mac OS X El Captain ##### Summary: Using the latest (9eb0c178ec2f4e94ae29329fa9694a381157b413) cloudformation module, below **does not** works: ``` yaml - name: Removing RDS instances cloudformation2: stack_name: ""{{ db_stack_name }}"" state: ""absent"" region: ""{{ aws_region }}"" ``` But passing a template or template_url, even if dummy, works: ``` yaml - name: Removing RDS instances cloudformation2: stack_name: ""{{ db_stack_name }}"" state: ""absent"" region: ""{{ aws_region }}"" template_url: ""it's a bug"" ``` ##### Expected Results: According to docs, template/template_url should only be required when state: present. ##### Actual Results: ``` TASK: [Removing RDS instances] ************************************************ failed: [localhost] => {""failed"": true} msg: Either template or template_url expected cloudformation2 [localhost] => { ""msg"": ""Either template or template_url expected"", ""failed"": true } ``` ",1,cloudformation state absent requires template or template url issue type bug report ansible version ansible cloudformation py version environment mac os x el captain summary using the latest cloudformation module below does not works yaml name removing rds instances stack name db stack name state absent region aws region but passing a template or template url even if dummy works yaml name removing rds instances stack name db stack name state absent region aws region template url it s a bug expected results according to docs template template url should only be required when state present actual results task failed failed true msg either template or template url expected msg either template or template url expected failed true ,1 743,4349482374.0,IssuesEvent,2016-07-30 16:03:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,digital_ocean droplet creation with unique_name,bug_report cloud digital_ocean waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME digital_ocean ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT CentOS: 7.1 python: 2.7.5 dopy: 0.3.6 ##### SUMMARY If a DigitalOcean account has more that 20 existing droplets the unique_man=yes option doesn't match any droplets over this limit and creates new droplets as per default. ##### STEPS TO REPRODUCE Create 20+ uniquely named droplets. ``` curl -X GET -H ""Content-Type: application/json"" -H ""Authorization: Bearer 123"" ""https://api.digitalocean.com/v2/droplets {""droplets"":[{""id"": **limited to 20 droplets** ,""tags"":[]}],""links"":{""pages"":{""last"":""https://api.digitalocean.com/v2/droplets?page=2"",""next"":""https://api.digitalocean.com/v2/droplets?page=2""}},""meta"":{""total"":21}} ``` Create a new droplet and re-run the command. 
``` - name: ensure unique-droplet-name droplets exists digital_ocean: > state=present command=droplet name=""unique-droplet-name"" unique_name=yes size_id=""512mb"" region_id=""lon1"" image_id=""123"" ssh_key_ids=""123"" api_token=""123"" ``` ##### EXPECTED RESULTS The first request should have created a new droplet as per normal. The second request should have found the existing droplet and skipped the request. ##### ACTUAL RESULTS A new droplet with the same name as the original was created. ##### SUGGESTED FIX Make use of the links and meta section of the API response to request multiple pages. ``` ""https://api.digitalocean.com/v2/droplets {""droplets"":[{""id"": **limited to 20 droplets** ,""tags"":[]}],""links"":{""pages"":{""last"":""https://api.digitalocean.com/v2/droplets?page=2"",""next"":""https://api.digitalocean.com/v2/droplets?page=2""}},""meta"":{""total"":23}} ```",True,"digital_ocean droplet creation with unique_name - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME digital_ocean ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT CentOS: 7.1 python: 2.7.5 dopy: 0.3.6 ##### SUMMARY If a DigitalOcean account has more that 20 existing droplets the unique_man=yes option doesn't match any droplets over this limit and creates new droplets as per default. ##### STEPS TO REPRODUCE Create 20+ uniquely named droplets. ``` curl -X GET -H ""Content-Type: application/json"" -H ""Authorization: Bearer 123"" ""https://api.digitalocean.com/v2/droplets {""droplets"":[{""id"": **limited to 20 droplets** ,""tags"":[]}],""links"":{""pages"":{""last"":""https://api.digitalocean.com/v2/droplets?page=2"",""next"":""https://api.digitalocean.com/v2/droplets?page=2""}},""meta"":{""total"":21}} ``` Create a new droplet and re-run the command. ``` - name: ensure unique-droplet-name droplets exists digital_ocean: > state=present command=droplet name=""unique-droplet-name"" unique_name=yes size_id=""512mb"" region_id=""lon1"" image_id=""123"" ssh_key_ids=""123"" api_token=""123"" ``` ##### EXPECTED RESULTS The first request should have created a new droplet as per normal. The second request should have found the existing droplet and skipped the request. ##### ACTUAL RESULTS A new droplet with the same name as the original was created. ##### SUGGESTED FIX Make use of the links and meta section of the API response to request multiple pages. 
``` ""https://api.digitalocean.com/v2/droplets {""droplets"":[{""id"": **limited to 20 droplets** ,""tags"":[]}],""links"":{""pages"":{""last"":""https://api.digitalocean.com/v2/droplets?page=2"",""next"":""https://api.digitalocean.com/v2/droplets?page=2""}},""meta"":{""total"":23}} ```",1,digital ocean droplet creation with unique name issue type bug report component name digital ocean ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment centos python dopy summary if a digitalocean account has more that existing droplets the unique man yes option doesn t match any droplets over this limit and creates new droplets as per default steps to reproduce create uniquely named droplets curl x get h content type application json h authorization bearer droplets id limited to droplets tags links pages last create a new droplet and re run the command name ensure unique droplet name droplets exists digital ocean state present command droplet name unique droplet name unique name yes size id region id image id ssh key ids api token expected results the first request should have created a new droplet as per normal the second request should have found the existing droplet and skipped the request actual results a new droplet with the same name as the original was created suggested fix make use of the links and meta section of the api response to request multiple pages droplets id limited to droplets tags links pages last ,1 1714,6574460855.0,IssuesEvent,2017-09-11 12:58:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Nxos_reboot ends in timeout error when it's successfull ,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_reboot ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile=localstage #hostfile=mas-b43 ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/masd-rsa host_key_checking=False ##### OS / ENVIRONMENT Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Switch is rebooted but Ansible errors out. ##### STEPS TO REPRODUCE ``` --- - name: copy configs hosts: - n35-bmc - basebmctemplate - n35-tor - basetortemplate - basetor40gtemplate - n35-agg - baseaggtemplate remote_user: admin gather_facts: no connection: local vars: cli: host: ""{{ ansible_host }}"" transport: cli username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub roles: - copyfirmware roles/copyfirmware/tasks/main.yml --- - nxos_reboot: provider: ""{{ cli }}"" confirm: true host: ""{{ ansible_host }}"" username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub ``` ##### EXPECTED RESULTS reload switch end with a success value. 
##### ACTUAL RESULTS ``` TASK [copyfirmware : nxos_reboot] ********************************************** task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/roles/copyfirmware/tasks/main.yml:23 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_reboot.py <10.10.228.60> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.228.60> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `"" && echo ansible-tmp-1478208427.24-135809306234074=""` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `"" ) && sleep 0' <10.10.228.60> PUT /tmp/tmp31WWF5 TO /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py <10.10.228.60> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/ /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py && sleep 0' <10.10.228.60> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/"" > /dev/null 2>&1 && sleep 0' fatal: [rr1-n22-r09-3132hl-3-1d]: FAILED! => { ""changed"": false, ""error"": ""timeout trying to send command: reload\r"", ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""config"": null, ""confirm"": true, ""host"": ""10.10.228.60"", ""include_defaults"": ""False"", ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.228.60"", ""ssh_keyfile"": ""/srv/tftpboot/my-rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""save"": false, ""ssh_keyfile"": ""/srv/tftpboot/my-rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true }, ""module_name"": ""nxos_reboot"" }, ""msg"": ""Error sending ['reload']"" } to retry, use: --limit @/home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/nexusbaseconfig.retry ``` ",True,"Nxos_reboot ends in timeout error when it's successfull - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos_reboot ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION [defaults] hostfile=localstage #hostfile=mas-b43 ansible_ssh_user=admin ansible_ssh_private_key_file=/home/emarq/.ssh/masd-rsa host_key_checking=False ##### OS / ENVIRONMENT Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY Switch is rebooted but Ansible errors out. ##### STEPS TO REPRODUCE ``` --- - name: copy configs hosts: - n35-bmc - basebmctemplate - n35-tor - basetortemplate - basetor40gtemplate - n35-agg - baseaggtemplate remote_user: admin gather_facts: no connection: local vars: cli: host: ""{{ ansible_host }}"" transport: cli username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub roles: - copyfirmware roles/copyfirmware/tasks/main.yml --- - nxos_reboot: provider: ""{{ cli }}"" confirm: true host: ""{{ ansible_host }}"" username: admin ssh_keyfile: /srv/tftpboot/my-rsa.pub ``` ##### EXPECTED RESULTS reload switch end with a success value. 
##### ACTUAL RESULTS ``` TASK [copyfirmware : nxos_reboot] ********************************************** task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/roles/copyfirmware/tasks/main.yml:23 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_reboot.py <10.10.228.60> ESTABLISH LOCAL CONNECTION FOR USER: emarq <10.10.228.60> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `"" && echo ansible-tmp-1478208427.24-135809306234074=""` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `"" ) && sleep 0' <10.10.228.60> PUT /tmp/tmp31WWF5 TO /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py <10.10.228.60> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/ /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py && sleep 0' <10.10.228.60> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py; rm -rf ""/home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/"" > /dev/null 2>&1 && sleep 0' fatal: [rr1-n22-r09-3132hl-3-1d]: FAILED! => { ""changed"": false, ""error"": ""timeout trying to send command: reload\r"", ""failed"": true, ""invocation"": { ""module_args"": { ""auth_pass"": null, ""authorize"": false, ""config"": null, ""confirm"": true, ""host"": ""10.10.228.60"", ""include_defaults"": ""False"", ""password"": null, ""port"": null, ""provider"": { ""host"": ""10.10.228.60"", ""ssh_keyfile"": ""/srv/tftpboot/my-rsa.pub"", ""transport"": ""cli"", ""username"": ""admin"" }, ""save"": false, ""ssh_keyfile"": ""/srv/tftpboot/my-rsa.pub"", ""timeout"": 10, ""transport"": ""cli"", ""use_ssl"": false, ""username"": ""admin"", ""validate_certs"": true }, ""module_name"": ""nxos_reboot"" }, ""msg"": ""Error sending ['reload']"" } to retry, use: --limit @/home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/nexusbaseconfig.retry ``` ",1,nxos reboot ends in timeout error when it s successfull issue type bug report component name nxos reboot ansible version ansible config file home emarq solutions network automation mas ansible cisco nexus ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables hostfile localstage hostfile mas ansible ssh user admin ansible ssh private key file home emarq ssh masd rsa host key checking false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux generic ubuntu smp wed oct utc gnu linux summary switch is rebooted but ansible errors out steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name copy configs hosts bmc basebmctemplate tor basetortemplate agg baseaggtemplate remote user admin gather facts no connection local vars cli host ansible host transport cli username admin ssh keyfile srv tftpboot my rsa pub roles copyfirmware roles copyfirmware tasks main yml nxos reboot provider cli confirm true host ansible host username admin ssh keyfile srv tftpboot my rsa pub expected results reload switch end with a success value actual results task task path home emarq solutions network automation mas ansible cisco nexus roles copyfirmware tasks main yml using module file 
usr lib dist packages ansible modules core network nxos nxos reboot py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home emarq ansible tmp ansible tmp nxos reboot py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos reboot py sleep exec bin sh c usr bin python home emarq ansible tmp ansible tmp nxos reboot py rm rf home emarq ansible tmp ansible tmp dev null sleep fatal failed changed false error timeout trying to send command reload r failed true invocation module args auth pass null authorize false config null confirm true host include defaults false password null port null provider host ssh keyfile srv tftpboot my rsa pub transport cli username admin save false ssh keyfile srv tftpboot my rsa pub timeout transport cli use ssl false username admin validate certs true module name nxos reboot msg error sending to retry use limit home emarq solutions network automation mas ansible cisco nexus nexusbaseconfig retry ,1 1756,6574983455.0,IssuesEvent,2017-09-11 14:41:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Support for Logon to Amazon EC2 Container Registry,affects_2.1 cloud docker feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_login ##### ANSIBLE VERSION ``` 2.1.1.0 ``` ##### SUMMARY As far as I know, the only way to logon to (and pull docker images from / push to) [Amazon ECR](https://aws.amazon.com/de/ecr/) is via the shell module. It would be nice if the docker_login module supports logon to ECR so that further ansible docker tasks can directly work with ECR. This is how I handle this currently: ``` - name: ECR login shell: ""$(aws ecr get-login --region eu-central-1)"" - name: Pull image from ECR shell: ""docker pull myid.dkr.ecr.eu-central-1.amazonaws.com/my-app:latest"" ``` The output from `aws ecr get-login` is: ``` docker login -u AWS -p [VERY-LONG-PASSWORD] -e none https://myid.dkr.ecr.eu-central-1.amazonaws.com ``` The generated password is valid for 12 hours. Of course, I could use awk to get the password from the output and then use docker_login. But the output format may change... ",True,"Support for Logon to Amazon EC2 Container Registry - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_login ##### ANSIBLE VERSION ``` 2.1.1.0 ``` ##### SUMMARY As far as I know, the only way to logon to (and pull docker images from / push to) [Amazon ECR](https://aws.amazon.com/de/ecr/) is via the shell module. It would be nice if the docker_login module supports logon to ECR so that further ansible docker tasks can directly work with ECR. This is how I handle this currently: ``` - name: ECR login shell: ""$(aws ecr get-login --region eu-central-1)"" - name: Pull image from ECR shell: ""docker pull myid.dkr.ecr.eu-central-1.amazonaws.com/my-app:latest"" ``` The output from `aws ecr get-login` is: ``` docker login -u AWS -p [VERY-LONG-PASSWORD] -e none https://myid.dkr.ecr.eu-central-1.amazonaws.com ``` The generated password is valid for 12 hours. Of course, I could use awk to get the password from the output and then use docker_login. But the output format may change... 
",1,support for logon to amazon container registry issue type feature idea component name docker login ansible version summary as far as i know the only way to logon to and pull docker images from push to is via the shell module it would be nice if the docker login module supports logon to ecr so that further ansible docker tasks can directly work with ecr this is how i handle this currently name ecr login shell aws ecr get login region eu central name pull image from ecr shell docker pull myid dkr ecr eu central amazonaws com my app latest the output from aws ecr get login is docker login u aws p e none the generated password is valid for hours of course i could use awk to get the password from the output and then use docker login but the output format may change ,1 1661,6574048167.0,IssuesEvent,2017-09-11 11:14:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config: multiline ip sla does not correctly handle escaped URLs,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT Ubuntu 16.04 managing Cisco 2901 Router ##### SUMMARY Trying to script pushing a multi-line ""ip sla"" stanza and it appears to not handle escaped strings (i.e. URLs) correctly. ##### STEPS TO REPRODUCE Desired IOS config (FQDN has been changed): ``` ip sla 1000 http raw http://www.example.com/data/lib/10k.txt http-raw-request GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n \r\n exit ``` Using this playbook task: ``` - name: CREATE WORKING IP SLA ios_config: provider: ""{{ provider }}"" authorize: yes lines: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" - ""GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n"" - ""\r\n"" - exit parents: ip sla 1000 after: ""ip sla schedule 1000 life forever start-time now"" ``` ##### EXPECTED RESULTS I can successfully push the following generic ""ip sla http get"" example: ``` ip sla 1000 http get http://www.ibm.com/data/lib/10k.txt ip sla schedule 1000 start-time now ``` ##### ACTUAL RESULTS From failed multi-line ""ip sla"" example: ``` root@playground:/etc/ansible/net-eng# ansible-playbook -vvvv ip_sla.yaml Using /etc/ansible/ansible.cfg as config file ERROR! Syntax Error while loading YAML. The error appears to have been in '/etc/ansible/net-eng/ip_sla.yaml': line 42, column 1, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" ^ here ``` ",True,"ios_config: multiline ip sla does not correctly handle escaped URLs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT Ubuntu 16.04 managing Cisco 2901 Router ##### SUMMARY Trying to script pushing a multi-line ""ip sla"" stanza and it appears to not handle escaped strings (i.e. URLs) correctly. 
##### STEPS TO REPRODUCE Desired IOS config (FQDN has been changed): ``` ip sla 1000 http raw http://www.example.com/data/lib/10k.txt http-raw-request GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n \r\n exit ``` Using this playbook task: ``` - name: CREATE WORKING IP SLA ios_config: provider: ""{{ provider }}"" authorize: yes lines: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" - ""GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n"" - ""\r\n"" - exit parents: ip sla 1000 after: ""ip sla schedule 1000 life forever start-time now"" ``` ##### EXPECTED RESULTS I can successfully push the following generic ""ip sla http get"" example: ``` ip sla 1000 http get http://www.ibm.com/data/lib/10k.txt ip sla schedule 1000 start-time now ``` ##### ACTUAL RESULTS From failed multi-line ""ip sla"" example: ``` root@playground:/etc/ansible/net-eng# ansible-playbook -vvvv ip_sla.yaml Using /etc/ansible/ansible.cfg as config file ERROR! Syntax Error while loading YAML. The error appears to have been in '/etc/ansible/net-eng/ip_sla.yaml': line 42, column 1, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" ^ here ``` ",1,ios config multiline ip sla does not correctly handle escaped urls issue type bug report component name ios config ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ubuntu managing cisco router summary trying to script pushing a multi line ip sla stanza and it appears to not handle escaped strings i e urls correctly steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used desired ios config fqdn has been changed ip sla http raw http raw request get http r n r n exit using this playbook task name create working ip sla ios config provider provider authorize yes lines http raw http raw request get http r n r n exit parents ip sla after ip sla schedule life forever start time now expected results i can successfully push the following generic ip sla http get example ip sla http get ip sla schedule start time now actual results from failed multi line ip sla example root playground etc ansible net eng ansible playbook vvvv ip sla yaml using etc ansible ansible cfg as config file error syntax error while loading yaml the error appears to have been in etc ansible net eng ip sla yaml line column but may be elsewhere in the file depending on the exact syntax problem the offending line appears to be http raw http raw request here ,1 1739,6574877277.0,IssuesEvent,2017-09-11 14:22:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container networks not working as expected (viz. mac addresses),affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY [Relevant Docker docs](https://docs.docker.com/engine/reference/run/#/network-settings). Should be able to set mac address for the connected networks (I have my container connected to two networks, but need its mac address set in one of them. 
Using the `mac_address` configuration option only sets the address in the default bridge network, not in my custom network. ##### STEPS TO REPRODUCE Try to set mac address on container, fail to do so. ``` - name: start container become: false docker_container: name: blah state: started restart_policy: always image: ""{{ blah.image }}"" env: ""{{ blah.environment }}"" log_opt: ""{{ log_opts }}"" volumes: ""{{ blah.volumes }}"" ports: ""{{ blah.ports }}"" mac_address: ""{{ mac_address.stdout }}"" networks: - name: ""{{ custom_network_name }}"" ipv6_address: ""{{ blah_ipv6_address }}"" driver: bridge ``` ##### EXPECTED RESULTS I would expect the custom network to have the mac address specified. ##### ACTUAL RESULTS Container gets two networks, one being the default bridge, the other being the custom network. Only the former gets the custom mac address. ",True,"docker_container networks not working as expected (viz. mac addresses) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY [Relevant Docker docs](https://docs.docker.com/engine/reference/run/#/network-settings). Should be able to set mac address for the connected networks (I have my container connected to two networks, but need its mac address set in one of them. Using the `mac_address` configuration option only sets the address in the default bridge network, not in my custom network. ##### STEPS TO REPRODUCE Try to set mac address on container, fail to do so. ``` - name: start container become: false docker_container: name: blah state: started restart_policy: always image: ""{{ blah.image }}"" env: ""{{ blah.environment }}"" log_opt: ""{{ log_opts }}"" volumes: ""{{ blah.volumes }}"" ports: ""{{ blah.ports }}"" mac_address: ""{{ mac_address.stdout }}"" networks: - name: ""{{ custom_network_name }}"" ipv6_address: ""{{ blah_ipv6_address }}"" driver: bridge ``` ##### EXPECTED RESULTS I would expect the custom network to have the mac address specified. ##### ACTUAL RESULTS Container gets two networks, one being the default bridge, the other being the custom network. Only the former gets the custom mac address. 
",1,docker container networks not working as expected viz mac addresses issue type bug report component name docker container ansible version ansible config file configured module search path default w o overrides configuration n a os environment n a summary should be able to set mac address for the connected networks i have my container connected to two networks but need its mac address set in one of them using the mac address configuration option only sets the address in the default bridge network not in my custom network steps to reproduce try to set mac address on container fail to do so name start container become false docker container name blah state started restart policy always image blah image env blah environment log opt log opts volumes blah volumes ports blah ports mac address mac address stdout networks name custom network name address blah address driver bridge expected results i would expect the custom network to have the mac address specified actual results container gets two networks one being the default bridge the other being the custom network only the former gets the custom mac address ,1 1734,6574851466.0,IssuesEvent,2017-09-11 14:17:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Modifying VM with vsphere_guest failing with current_devices is not defined.,affects_2.1 bug_report cloud vmware waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` [core@bhudcent7 coreos]$ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides [core@bhudcent7 coreos] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY When trying to perform reconfigure of exiting vm failing with global name ' current_devices' is not defined ##### STEPS TO REPRODUCE ``` --- - name: Create mesos VMS hosts: localhost connection: local tasks: - name: modify master vms vsphere_guest: vcenter_hostname: IP username: USERNAME password: PW guest: ""{{ item }}"" state: reconfigured vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: Mesos Master folder: Mesos vm_hardware: memory_mb: 16384 num_cpus: 2 osid: centos64Guest scsi: paravirtual vm_cdrom: type: ""iso"" iso_path: ""CENTOS2/ISO/mesos/configdrive-{{ item }}.iso"" esxi: datacenter: HomeLab hostname: 192.168.1.21 with_items: - mesosm01 - mesosm02 - mesosm03 ``` ##### EXPECTED RESULTS Modify vm ram/cpu and ISO attached to VM ##### ACTUAL RESULTS 496600.14-13851980734504/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 1929, in main() File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 1856, in main force=force File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 924, in reconfigure_vm for dev in current_devices: NameError: global name 'current_devices' is not defined failed: [localhost](item=mesosm03) => {""failed"": true, ""invocation"": {""module_name"": ""vsphere_guest""}, ""item"": ""mesosm03"", ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 1929, in \n main()\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 1856, in main\n force=force\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 924, in reconfigure_vm\n for dev in current_devices:\nNameError: global name 'current_devices' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ",True,"Modifying VM with vsphere_guest failing with current_devices is not defined. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` [core@bhudcent7 coreos]$ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides [core@bhudcent7 coreos] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY When trying to perform reconfigure of exiting vm failing with global name ' current_devices' is not defined ##### STEPS TO REPRODUCE ``` --- - name: Create mesos VMS hosts: localhost connection: local tasks: - name: modify master vms vsphere_guest: vcenter_hostname: IP username: USERNAME password: PW guest: ""{{ item }}"" state: reconfigured vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: Mesos Master folder: Mesos vm_hardware: memory_mb: 16384 num_cpus: 2 osid: centos64Guest scsi: paravirtual vm_cdrom: type: ""iso"" iso_path: ""CENTOS2/ISO/mesos/configdrive-{{ item }}.iso"" esxi: datacenter: HomeLab hostname: 192.168.1.21 with_items: - mesosm01 - mesosm02 - mesosm03 ``` ##### EXPECTED RESULTS Modify vm ram/cpu and ISO attached to VM ##### ACTUAL RESULTS 496600.14-13851980734504/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 1929, in main() File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 1856, in main force=force File ""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py"", line 924, in reconfigure_vm for dev in current_devices: NameError: global name 'current_devices' is not defined failed: [localhost](item=mesosm03) => {""failed"": true, ""invocation"": {""module_name"": ""vsphere_guest""}, ""item"": ""mesosm03"", ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 1929, in \n main()\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 1856, in main\n force=force\n File \""/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\"", line 924, in reconfigure_vm\n for dev in current_devices:\nNameError: global name 'current_devices' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ",1,modifying vm with vsphere guest failing with current devices is not defined issue type bug report component name vsphere guest ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration only mod i have done is to disable host key checking uncomment this to disable ssh key host checking host key checking false os environment centos summary when trying to perform reconfigure of exiting vm failing with global name current devices is not defined steps to reproduce clone template and try and modify cdrom iso image name create mesos vms hosts localhost connection local tasks name modify master vms vsphere guest vcenter hostname ip username username password pw guest item state reconfigured vm extra config vcpu hotadd yes mem hotadd yes notes mesos master folder mesos vm hardware memory mb num cpus osid scsi paravirtual vm cdrom type iso iso path iso mesos configdrive item iso esxi datacenter homelab hostname with items expected results modify vm ram cpu and iso attached to vm actual results dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible gzrbrs ansible module vsphere guest py line in main file tmp ansible gzrbrs ansible module vsphere guest py line in main force force file tmp ansible gzrbrs ansible module vsphere guest py line in reconfigure vm for dev in current devices nameerror global name current devices is not defined failed item failed true invocation module name vsphere guest item module stderr traceback most recent call last n file tmp ansible gzrbrs ansible module vsphere guest py line in n main n file tmp ansible gzrbrs ansible module vsphere guest py line in main n force force n file tmp ansible gzrbrs ansible module vsphere guest py line in reconfigure vm n for dev in current devices nnameerror global name current devices is not defined n module stdout msg module failure parsed false ,1 1847,6577385522.0,IssuesEvent,2017-09-12 00:32:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_asg module sporadically fails to get async notification,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from: Ubuntu Linux Managing: Ubuntu Linux ##### SUMMARY We (seeming randomly) get cases where 
we'll do a blue/green deploy using the ""ec2_asg"" module and have an async task waiting for the module to return a result. The task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding. ##### STEPS TO REPRODUCE - Create a new launch config (our new ""blue"" deploy) - Run the ""ec2_asg"" task with the new launch config (with async set and ""poll: 0"") - Have a task later in the playbook waiting on the result - Confirm that the deploy succeeds in AWS (new instances brought up, old ones terminated) - See that the ""async_status"" job never gets the notification that the deploy has happened ``` - name: Create integration tier launch configuration ec2_lc: name: ""{{ environ }}-int-launch-config-{{ current_time }}"" [OMITTED FOR BREVITY] register: int_launch_configuration - name: Create Integration Autoscaling group ec2_asg: name: ""{{ environ }}-int-asg"" launch_config_name: ""{{ environ }}-int-launch-config-{{ current_time }}"" vpc_zone_identifier: ""{{ int_subnets }}"" health_check_type: ""ELB"" health_check_period: 400 termination_policies: ""OldestInstance"" replace_all_instances: yes wait_timeout: 2400 replace_batch_size: ""{{ int_replace_batch_size }}"" async: 1000 poll: 0 register: int_asg_sleeper - name: 'int ASG - check on fire and forget task' async_status: jid={{ int_asg_sleeper.ansible_job_id }} register: int_asg_job_result until: int_asg_job_result.finished retries: 60 delay: 15 ``` ##### EXPECTED RESULTS Expected that when the deploy succeeds and the ""old"" instances are terminated, the Async job gets the message and reports success. ##### ACTUAL RESULTS It appears that the ""file"" mechanism which Python/Ansible use for checking on the status of background jobs fails and the file is never populated, despite the job finishing. Therefore the job polling the file times out eventually. ``` 08:03:34.063 TASK [launch-config : int ASG - check on fire and forget task] ***************** 08:03:34.130 fatal: [localhost]: FAILED! => {""failed"": true, ""msg"": ""ERROR! The conditional check 'int_asg_job_result.finished' failed. The error was: ERROR! error while evaluating conditional (int_asg_job_result.finished): ERROR! 'dict object' has no attribute 'finished'""} ``` ",True,"ec2_asg module sporadically fails to get async notification - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from: Ubuntu Linux Managing: Ubuntu Linux ##### SUMMARY We (seeming randomly) get cases where we'll do a blue/green deploy using the ""ec2_asg"" module and have an async task waiting for the module to return a result. The task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding. 
##### STEPS TO REPRODUCE - Create a new launch config (our new ""blue"" deploy) - Run the ""ec2_asg"" task with the new launch config (with async set and ""poll: 0"") - Have a task later in the playbook waiting on the result - Confirm that the deploy succeeds in AWS (new instances brought up, old ones terminated) - See that the ""async_status"" job never gets the notification that the deploy has happened ``` - name: Create integration tier launch configuration ec2_lc: name: ""{{ environ }}-int-launch-config-{{ current_time }}"" [OMITTED FOR BREVITY] register: int_launch_configuration - name: Create Integration Autoscaling group ec2_asg: name: ""{{ environ }}-int-asg"" launch_config_name: ""{{ environ }}-int-launch-config-{{ current_time }}"" vpc_zone_identifier: ""{{ int_subnets }}"" health_check_type: ""ELB"" health_check_period: 400 termination_policies: ""OldestInstance"" replace_all_instances: yes wait_timeout: 2400 replace_batch_size: ""{{ int_replace_batch_size }}"" async: 1000 poll: 0 register: int_asg_sleeper - name: 'int ASG - check on fire and forget task' async_status: jid={{ int_asg_sleeper.ansible_job_id }} register: int_asg_job_result until: int_asg_job_result.finished retries: 60 delay: 15 ``` ##### EXPECTED RESULTS Expected that when the deploy succeeds and the ""old"" instances are terminated, the Async job gets the message and reports success. ##### ACTUAL RESULTS It appears that the ""file"" mechanism which Python/Ansible use for checking on the status of background jobs fails and the file is never populated, despite the job finishing. Therefore the job polling the file times out eventually. ``` 08:03:34.063 TASK [launch-config : int ASG - check on fire and forget task] ***************** 08:03:34.130 fatal: [localhost]: FAILED! => {""failed"": true, ""msg"": ""ERROR! The conditional check 'int_asg_job_result.finished' failed. The error was: ERROR! error while evaluating conditional (int_asg_job_result.finished): ERROR! 
'dict object' has no attribute 'finished'""} ``` ",1, asg module sporadically fails to get async notification issue type bug report component name asg ansible version ansible configuration os environment running from ubuntu linux managing ubuntu linux summary we seeming randomly get cases where we ll do a blue green deploy using the asg module and have an async task waiting for the module to return a result the task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding steps to reproduce create a new launch config our new blue deploy run the asg task with the new launch config with async set and poll have a task later in the playbook waiting on the result confirm that the deploy succeeds in aws new instances brought up old ones terminated see that the async status job never gets the notification that the deploy has happened name create integration tier launch configuration lc name environ int launch config current time register int launch configuration name create integration autoscaling group asg name environ int asg launch config name environ int launch config current time vpc zone identifier int subnets health check type elb health check period termination policies oldestinstance replace all instances yes wait timeout replace batch size int replace batch size async poll register int asg sleeper name int asg check on fire and forget task async status jid int asg sleeper ansible job id register int asg job result until int asg job result finished retries delay expected results expected that when the deploy succeeds and the old instances are terminated the async job gets the message and reports success actual results it appears that the file mechanism which python ansible use for checking on the status of background jobs fails and the file is never populated despite the job finishing therefore the job polling the file times out eventually task fatal failed failed true msg error the conditional check int asg job result finished failed the error was error error while evaluating conditional int asg job result finished error dict object has no attribute finished ,1 1811,6576170361.0,IssuesEvent,2017-09-11 18:47:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,shell module run python script exception on ansible 2.1.1.0,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION roles_path = ./roles callback_plugins = ./plugins/callback_plugins lookup_plugins = ./plugins/lookup_plugins inventory = inventory gathering = smart ##### OS / ENVIRONMENT Mac OS EI 10.11.3 ##### SUMMARY The script works on ansible 1.9.1 while 2.1.1.0 doesn't work. I've no idea about this? ##### STEPS TO REPRODUCE ansible-playbook playbooks/sample.yml ``` - name: Download AWS Security Group hosts: localhost connection: local gather_facts: False tasks: - shell: ../roles/script/files/test.py > /tmp/test.yml ``` the ../roles/script/files/test.py has below information # !/usr/bin/env python import os print os.path ##### EXPECTED RESULTS The python script should work in 2.1.1.0 ##### ACTUAL RESULTS It's failed. 
``` TASK [command] ***************************************************************** task path: /Users/Test/production/ansible/playbooks/security-group.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: HenryWen <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451 `"" && echo ansible-tmp-1473759673.83-249336867857451=""` echo $HOME/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/st/7gf56xv52jn97pthscx8sdw00000gn/T/tmp35KP0I TO /Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/command <127.0.0.1> EXEC /bin/sh -c 'LANG=zh_CN.UTF-8 LC_ALL=zh_CN.UTF-8 LC_MESSAGES=zh_CN.UTF-8 /usr/bin/python /Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/command; rm -rf ""/Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! => {""changed"": true, ""cmd"": ""../roles/script/files/test.py > /tmp/security_groups.yml"", ""delta"": ""0:00:00.004941"", ""end"": ""2016-09-13 17:41:13.979999"", ""failed"": true, ""invocation"": {""module_args"": {""_raw_params"": ""../roles/script/files/test.py > /tmp/test.yml"", ""_uses_shell"": true, ""chdir"": null, ""creates"": null, ""executable"": null, ""removes"": null, ""warn"": true}, ""module_name"": ""command""}, ""rc"": 127, ""start"": ""2016-09-13 17:41:13.975058"", ""stderr"": ""/bin/sh: ../roles/script/files/test .py: No such file or directory"", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ",True,"shell module run python script exception on ansible 2.1.1.0 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell module ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION roles_path = ./roles callback_plugins = ./plugins/callback_plugins lookup_plugins = ./plugins/lookup_plugins inventory = inventory gathering = smart ##### OS / ENVIRONMENT Mac OS EI 10.11.3 ##### SUMMARY The script works on ansible 1.9.1 while 2.1.1.0 doesn't work. I've no idea about this? ##### STEPS TO REPRODUCE ansible-playbook playbooks/sample.yml ``` - name: Download AWS Security Group hosts: localhost connection: local gather_facts: False tasks: - shell: ../roles/script/files/test.py > /tmp/test.yml ``` the ../roles/script/files/test.py has below information # !/usr/bin/env python import os print os.path ##### EXPECTED RESULTS The python script should work in 2.1.1.0 ##### ACTUAL RESULTS It's failed. ``` TASK [command] ***************************************************************** task path: /Users/Test/production/ansible/playbooks/security-group.yml:6 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: HenryWen <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451 `"" && echo ansible-tmp-1473759673.83-249336867857451=""` echo $HOME/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451 `"" ) && sleep 0' <127.0.0.1> PUT /var/folders/st/7gf56xv52jn97pthscx8sdw00000gn/T/tmp35KP0I TO /Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/command <127.0.0.1> EXEC /bin/sh -c 'LANG=zh_CN.UTF-8 LC_ALL=zh_CN.UTF-8 LC_MESSAGES=zh_CN.UTF-8 /usr/bin/python /Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/command; rm -rf ""/Users/Test/.ansible/tmp/ansible-tmp-1473759673.83-249336867857451/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> {""changed"": true, ""cmd"": ""../roles/script/files/test.py > /tmp/security_groups.yml"", ""delta"": ""0:00:00.004941"", ""end"": ""2016-09-13 17:41:13.979999"", ""failed"": true, ""invocation"": {""module_args"": {""_raw_params"": ""../roles/script/files/test.py > /tmp/test.yml"", ""_uses_shell"": true, ""chdir"": null, ""creates"": null, ""executable"": null, ""removes"": null, ""warn"": true}, ""module_name"": ""command""}, ""rc"": 127, ""start"": ""2016-09-13 17:41:13.975058"", ""stderr"": ""/bin/sh: ../roles/script/files/test .py: No such file or directory"", ""stdout"": """", ""stdout_lines"": [], ""warnings"": []} ``` ",1,shell module run python script exception on ansible issue type bug report component name shell module ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables roles path roles callback plugins plugins callback plugins lookup plugins plugins lookup plugins inventory inventory gathering smart os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific mac os ei summary the script works on ansible while doesn t work i ve no idea about this steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook playbooks sample yml name download aws security group hosts localhost connection local gather facts false tasks shell roles script files test py tmp test yml the roles script files test py has below information usr bin env python import os print os path expected results the python script should work in actual results it s failed task task path users test production ansible playbooks security group yml establish local connection for user henrywen exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders st t to users test ansible tmp ansible tmp command exec bin sh c lang zh cn utf lc all zh cn utf lc messages zh cn utf usr bin python users test ansible tmp ansible tmp command rm rf users test ansible tmp ansible tmp dev null sleep fatal failed changed true cmd roles script files test py tmp security groups yml delta end failed true invocation module args raw params roles script files test py tmp test yml uses shell true chdir null creates null executable null removes null warn true module name command rc start stderr bin sh roles script files test py no such file or directory stdout stdout lines warnings ,1 1662,6574059238.0,IssuesEvent,2017-09-11 11:17:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami module broken in Ansible 2.2,affects_2.2 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME component: ec2_ami ##### ANSIBLE VERSION ``` $ ansible --version /usr/lib64/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. 
A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT $ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.7 (Santiago) $ sudo pip list | grep -i boto DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6 boto (2.43.0) boto3 (1.4.2) botocore (1.4.81) ##### SUMMARY ec2_ami module crashes and fails to save AMI ##### STEPS TO REPRODUCE Set up an instance and try to save it using ec2_ami module ``` - name: Save the app server as an AMI ec2_ami: description: ""App server - created {{ ansible_date_time.iso8601 }}"" instance_id: ""{{ item.id }}"" name: ""Appserver.{{ ansible_date_time.date }}.{{ ansible_date_time.hour }}.{{ ansible_date_time.minute }}.{{ ansible_date_time.second }} {{ ansible_date_time.tz_offset }} {{ scm_branch.stdout }}"" region: ""{{ item.region }}"" wait: yes launch_permissions: user_ids: [""{{ aws_account_number_prod }}""] tags: git_branch: ""{{ scm_branch.stdout }}"" with_items: ""{{ ec2.instances }}"" register: saved_ami ``` ##### EXPECTED RESULTS For it to not crash. ##### ACTUAL RESULTS ``` TASK [save_ami : Save the app server as an AMI] ****************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Image' object has no attribute 'creationDate' failed: [127.0.0.1] (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-10-0-140.eu-central-1.compute.internal', u'public_ip': u'xx.xx.xx.xx', u'private_ip': u'10.10.0.140', u'id': u'i-718cb3cd', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sdb': {u'status': u'attached', u'delete_on_termination': False, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sdc': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}}, u'key_name': u'xxxxxxxx', u'image_id': u'ami-xxxxxxxx', u'tenancy': u'default', u'groups': {u'sg-xxxxxxx': u'image_build'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'image_build xxxxxxxx'}, u'placement': u'eu-central-1b', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'eu-central-1', u'launch_time': u'2016-12-02T01:03:35.000Z', u'instance_type': u't2.small', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {""failed"": true, ""item"": {""ami_launch_index"": ""0"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}, ""/dev/sdb"": {""delete_on_termination"": false, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}, ""/dev/sdc"": {""delete_on_termination"": true, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}}, ""dns_name"": """", ""ebs_optimized"": false, ""groups"": {""sg-xxxxxxxx"": ""image_build""}, ""hypervisor"": ""xen"", ""id"": ""i-718cb3cd"", ""image_id"": ""ami-xxxxxxxx"", ""instance_type"": ""t2.small"", ""kernel"": null, ""key_name"": ""xxxxxxxx"", ""launch_time"": ""2016-12-02T01:03:35.000Z"", ""placement"": ""eu-central-1b"", ""private_dns_name"": ""ip-10-10-0-140.eu-central-1.compute.internal"", 
""private_ip"": ""10.10.0.140"", ""public_dns_name"": """", ""public_ip"": ""xx.xx.xx.xx"", ""ramdisk"": null, ""region"": ""eu-central-1"", ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""running"", ""state_code"": 16, ""tags"": {""Name"": ""image_build""}, ""tenancy"": ""default"", ""virtualization_type"": ""hvm""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 560, in \n main()\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 552, in main\n create_image(module, ec2)\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 419, in create_image\n module.exit_json(msg=\""AMI creation operation complete\"", changed=True, **get_ami_info(img))\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 332, in get_ami_info\n creationDate=image.creationDate,\nAttributeError: 'Image' object has no attribute 'creationDate'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",True,"ec2_ami module broken in Ansible 2.2 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME component: ec2_ami ##### ANSIBLE VERSION ``` $ ansible --version /usr/lib64/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT $ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.7 (Santiago) $ sudo pip list | grep -i boto DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6 boto (2.43.0) boto3 (1.4.2) botocore (1.4.81) ##### SUMMARY ec2_ami module crashes and fails to save AMI ##### STEPS TO REPRODUCE Set up an instance and try to save it using ec2_ami module ``` - name: Save the app server as an AMI ec2_ami: description: ""App server - created {{ ansible_date_time.iso8601 }}"" instance_id: ""{{ item.id }}"" name: ""Appserver.{{ ansible_date_time.date }}.{{ ansible_date_time.hour }}.{{ ansible_date_time.minute }}.{{ ansible_date_time.second }} {{ ansible_date_time.tz_offset }} {{ scm_branch.stdout }}"" region: ""{{ item.region }}"" wait: yes launch_permissions: user_ids: [""{{ aws_account_number_prod }}""] tags: git_branch: ""{{ scm_branch.stdout }}"" with_items: ""{{ ec2.instances }}"" register: saved_ami ``` ##### EXPECTED RESULTS For it to not crash. ##### ACTUAL RESULTS ``` TASK [save_ami : Save the app server as an AMI] ****************** An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: AttributeError: 'Image' object has no attribute 'creationDate' failed: [127.0.0.1] (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-10-0-140.eu-central-1.compute.internal', u'public_ip': u'xx.xx.xx.xx', u'private_ip': u'10.10.0.140', u'id': u'i-718cb3cd', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sdb': {u'status': u'attached', u'delete_on_termination': False, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sdc': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}}, u'key_name': u'xxxxxxxx', u'image_id': u'ami-xxxxxxxx', u'tenancy': u'default', u'groups': {u'sg-xxxxxxx': u'image_build'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'image_build xxxxxxxx'}, u'placement': u'eu-central-1b', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'eu-central-1', u'launch_time': u'2016-12-02T01:03:35.000Z', u'instance_type': u't2.small', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {""failed"": true, ""item"": {""ami_launch_index"": ""0"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}, ""/dev/sdb"": {""delete_on_termination"": false, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}, ""/dev/sdc"": {""delete_on_termination"": true, ""status"": ""attached"", ""volume_id"": ""vol-xxxxxxxx""}}, ""dns_name"": """", ""ebs_optimized"": false, ""groups"": {""sg-xxxxxxxx"": ""image_build""}, ""hypervisor"": ""xen"", ""id"": ""i-718cb3cd"", ""image_id"": ""ami-xxxxxxxx"", ""instance_type"": ""t2.small"", ""kernel"": null, ""key_name"": ""xxxxxxxx"", ""launch_time"": ""2016-12-02T01:03:35.000Z"", ""placement"": ""eu-central-1b"", ""private_dns_name"": ""ip-10-10-0-140.eu-central-1.compute.internal"", ""private_ip"": ""10.10.0.140"", ""public_dns_name"": """", ""public_ip"": ""xx.xx.xx.xx"", ""ramdisk"": null, ""region"": ""eu-central-1"", ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""running"", ""state_code"": 16, ""tags"": {""Name"": ""image_build""}, ""tenancy"": ""default"", ""virtualization_type"": ""hvm""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 560, in \n main()\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 552, in main\n create_image(module, ec2)\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 419, in create_image\n module.exit_json(msg=\""AMI creation operation complete\"", changed=True, **get_ami_info(img))\n File \""/tmp/ansible_ault7w/ansible_module_ec2_ami.py\"", line 332, in get_ami_info\n creationDate=image.creationDate,\nAttributeError: 'Image' object has no attribute 'creationDate'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE""} ``` ",1, ami module broken in ansible issue type bug report component name component ami ansible version ansible version usr site packages cryptography init py deprecationwarning python is no longer supported by the python core team please upgrade your python a future version of cryptography will drop support for python deprecationwarning ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention 
any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific cat etc redhat release red hat enterprise linux server release santiago sudo pip list grep i boto deprecation python is no longer supported by the python core team please upgrade your python a future version of pip will drop support for python boto botocore summary ami module crashes and fails to save ami steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used set up an instance and try to save it using ami module name save the app server as an ami ami description app server created ansible date time instance id item id name appserver ansible date time date ansible date time hour ansible date time minute ansible date time second ansible date time tz offset scm branch stdout region item region wait yes launch permissions user ids tags git branch scm branch stdout with items instances register saved ami expected results for it to not crash actual results task an exception occurred during task execution to see the full traceback use vvv the error was attributeerror image object has no attribute creationdate failed item u kernel none u root device type u ebs u private dns name u ip eu central compute internal u public ip u xx xx xx xx u private ip u u id u i u ebs optimized false u state u running u virtualization type u hvm u architecture u u ramdisk none u block device mapping u dev sdb u status u attached u delete on termination false u volume id u vol xxxxxxxx u dev u status u attached u delete on termination true u volume id u vol xxxxxxxx u dev sdc u status u attached u delete on termination true u volume id u vol xxxxxxxx u key name u xxxxxxxx u image id u ami xxxxxxxx u tenancy u default u groups u sg xxxxxxx u image build u public dns name u u state code u tags u name u image build xxxxxxxx u placement u eu central u ami launch index u u dns name u u region u eu central u launch time u u instance type u small u root device name u dev u hypervisor u xen failed true item ami launch index architecture block device mapping dev delete on termination true status attached volume id vol xxxxxxxx dev sdb delete on termination false status attached volume id vol xxxxxxxx dev sdc delete on termination true status attached volume id vol xxxxxxxx dns name ebs optimized false groups sg xxxxxxxx image build hypervisor xen id i image id ami xxxxxxxx instance type small kernel null key name xxxxxxxx launch time placement eu central private dns name ip eu central compute internal private ip public dns name public ip xx xx xx xx ramdisk null region eu central root device name dev root device type ebs state running state code tags name image build tenancy default virtualization type hvm module stderr traceback most recent call last n file tmp ansible ansible module ami py line in n main n file tmp ansible ansible module ami py line in main n create image module n file tmp ansible ansible module ami py line in create image n module exit json msg ami creation operation complete changed true get ami info img n file tmp ansible ansible module ami py line in get ami info n creationdate image creationdate nattributeerror image object has no attribute creationdate n module stdout msg module failure ,1 810,4434218600.0,IssuesEvent,2016-08-18 
01:13:02,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,get_url not using environment variable no_proxy,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.2 (Maipo) ##### SUMMARY When setting the environment variable no_proxy, the get_url module doesn't use it. We need to bypass our corporate proxy to get files from our central fileshare. ##### STEPS TO REPRODUCE ``` --- - hosts: localhost vars: proxy_url: ""http://aws-proxy-us-east-1.company.com:8080"" noproxy_url: ""127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com"" url_to_get: ""https://fileshare.company.com/filename.tar"" dest_dir: ""/tmp"" tasks: - debug: msg=""proxy_url is {{ proxy_url }}"" - debug: msg=""noproxy_url is {{ noproxy_url }}"" - debug: msg=""url_to_get is {{ url_to_get }}"" - name: ""Set proxy environment variables"" set_fact: environment_vars: http_proxy: ""{{ proxy_url }}"" https_proxy: ""{{ proxy_url }}"" no_proxy: ""{{ noproxy_url }}"" HTTP_PROXY: ""{{ proxy_url }}"" HTTPS_PROXY: ""{{ proxy_url }}"" NO_PROXY: ""{{ noproxy_url }}"" - name: ""Download something with no env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf"" validate_certs: no force: yes become: no - name: ""Download something with env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf2"" validate_certs: no force: yes environment: ""{{ environment_vars }}"" become: no register: get_url_err ignore_errors: yes - debug: var=get_url_err ``` ##### EXPECTED RESULTS Both should download the file and drop in /tmp. For comparison, I ran this from command line using ""curl"". When I set the http_proxy and https_proxy, it failed with a 404 (blocked by the proxy). When I set no_proxy to our fileshare, it worked (bypassed the proxy). ##### ACTUAL RESULTS First task worked, second one says it timed out. However, when I captured the module output to a variable and dumped it, it was actually a 404. ``` TASK [Download something with env vars set] ************************************ task path: /home/ec2-user/geturltest.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" && echo ansible-tmp-1465584747.19-272127080396946=""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" ) && sleep 0' PUT /tmp/tmpC_cHj6 TO /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url EXEC /bin/sh -c 'LANG=en_US.UTF-8 HTTP_PROXY=http://aws-proxy-us-east-1.company.com:8080 LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=http://aws-proxy-us-east-1.company.com:8080 NO_PROXY=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com http_proxy=http://aws-proxy-us-east-1.company.com:8080 https_proxy=http://aws-proxy-us-east-1.company.com:8080 LC_ALL=en_US.UTF-8 no_proxy=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost -> localhost]: FAILED! 
=> {""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""invocation"": {""module_args"": {""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/wtf2"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""url"": ""https://fileshare.company.com/filename.tar"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, ""validate_certs"": false}, ""module_name"": ""get_url""}, ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""Request failed: "", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": -1, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar""} TASK [debug] ******************************************************************* task path: /home/ec2-user/geturltest.yml:47 ok: [localhost] => { ""get_url_err"": { ""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""HTTP Error 403: Forbidden"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": 403, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar"" } } ``` ",True,"get_url not using environment variable no_proxy - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.2 (Maipo) ##### SUMMARY When setting the environment variable no_proxy, the get_url module doesn't use it. We need to bypass our corporate proxy to get files from our central fileshare. ##### STEPS TO REPRODUCE ``` --- - hosts: localhost vars: proxy_url: ""http://aws-proxy-us-east-1.company.com:8080"" noproxy_url: ""127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com"" url_to_get: ""https://fileshare.company.com/filename.tar"" dest_dir: ""/tmp"" tasks: - debug: msg=""proxy_url is {{ proxy_url }}"" - debug: msg=""noproxy_url is {{ noproxy_url }}"" - debug: msg=""url_to_get is {{ url_to_get }}"" - name: ""Set proxy environment variables"" set_fact: environment_vars: http_proxy: ""{{ proxy_url }}"" https_proxy: ""{{ proxy_url }}"" no_proxy: ""{{ noproxy_url }}"" HTTP_PROXY: ""{{ proxy_url }}"" HTTPS_PROXY: ""{{ proxy_url }}"" NO_PROXY: ""{{ noproxy_url }}"" - name: ""Download something with no env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf"" validate_certs: no force: yes become: no - name: ""Download something with env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf2"" validate_certs: no force: yes environment: ""{{ environment_vars }}"" become: no register: get_url_err ignore_errors: yes - debug: var=get_url_err ``` ##### EXPECTED RESULTS Both should download the file and drop in /tmp. For comparison, I ran this from command line using ""curl"". When I set the http_proxy and https_proxy, it failed with a 404 (blocked by the proxy). 
When I set no_proxy to our fileshare, it worked (bypassed the proxy). ##### ACTUAL RESULTS First task worked, second one says it timed out. However, when I captured the module output to a variable and dumped it, it was actually a 404. ``` TASK [Download something with env vars set] ************************************ task path: /home/ec2-user/geturltest.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" && echo ansible-tmp-1465584747.19-272127080396946=""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" ) && sleep 0' PUT /tmp/tmpC_cHj6 TO /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url EXEC /bin/sh -c 'LANG=en_US.UTF-8 HTTP_PROXY=http://aws-proxy-us-east-1.company.com:8080 LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=http://aws-proxy-us-east-1.company.com:8080 NO_PROXY=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com http_proxy=http://aws-proxy-us-east-1.company.com:8080 https_proxy=http://aws-proxy-us-east-1.company.com:8080 LC_ALL=en_US.UTF-8 no_proxy=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""invocation"": {""module_args"": {""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/wtf2"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""url"": ""https://fileshare.company.com/filename.tar"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, ""validate_certs"": false}, ""module_name"": ""get_url""}, ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""Request failed: "", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": -1, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar""} TASK [debug] ******************************************************************* task path: /home/ec2-user/geturltest.yml:47 ok: [localhost] => { ""get_url_err"": { ""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""HTTP Error 403: Forbidden"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": 403, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar"" } } ``` ",1,get url not using environment variable no proxy issue type bug report component name get url ansible version ansible configuration default os environment red hat enterprise linux server release maipo summary when setting the environment variable no proxy the get url module doesn t use it we need to bypass our corporate proxy to get files from our central fileshare steps to reproduce hosts 
localhost vars proxy url noproxy url localhost local fileshare company com url to get dest dir tmp tasks debug msg proxy url is proxy url debug msg noproxy url is noproxy url debug msg url to get is url to get name set proxy environment variables set fact environment vars http proxy proxy url https proxy proxy url no proxy noproxy url http proxy proxy url https proxy proxy url no proxy noproxy url name download something with no env vars set local action module get url url url to get dest dest dir wtf validate certs no force yes become no name download something with env vars set local action module get url url url to get dest dest dir validate certs no force yes environment environment vars become no register get url err ignore errors yes debug var get url err expected results both should download the file and drop in tmp for comparison i ran this from command line using curl when i set the http proxy and https proxy it failed with a blocked by the proxy when i set no proxy to our fileshare it worked bypassed the proxy actual results first task worked second one says it timed out however when i captured the module output to a variable and dumped it it was actually a task task path home user geturltest yml establish local connection for user user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpc to home user ansible tmp ansible tmp get url exec bin sh c lang en us utf http proxy lc messages en us utf https proxy no proxy localhost local fileshare company com http proxy https proxy lc all en us utf no proxy localhost local fileshare company com usr bin python home user ansible tmp ansible tmp get url rm rf home user ansible tmp ansible tmp dev null sleep fatal failed changed false dest tmp failed true gid group user invocation module args backup false checksum content null delimiter null dest tmp directory mode null follow false force true force basic auth false group null headers null http agent ansible httpget mode null owner null regexp null remote src null selevel null serole null setype null seuser null src null timeout tmp dest url url password null url username null use proxy true validate certs false module name get url mode msg request failed owner user response request failed secontext unconfined u object r user tmp t size state file status code uid url task task path home user geturltest yml ok get url err changed false dest tmp failed true gid group user mode msg request failed owner user response http error forbidden secontext unconfined u object r user tmp t size state file status code uid url ,1 1754,6574970743.0,IssuesEvent,2017-09-11 14:39:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config changes some items which do not need to be changed,affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.3.0 commit d9b570a config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides core modules commit ec4eebc extras modules commit edbddf6 ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **CSR-1000v 16.2.2** ##### SUMMARY There is a 
strange behavior when some commands are correctly applied multiple times in a row: the previous occurrences are not seen as changed. For instance, the following tasks are always applied even though they have already been successfully run: ``` - name: Configuring location, contact & chassis ID information in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server location {{ snmp.location }}"" - ""snmp-server contact {{ snmp.contact }}"" - ""snmp-server chassis-id {{ snmp.chassis_id }}"" - ""snmp-server ifindex persist"" when: snmp.admin_state == ""up"" register: result ``` ``` TASK [ios_snmp : Configuring location, contact & chassis ID information in new SNMPv2c] *** changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server ifindex persist""], ""warnings"": []} changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server ifindex persist""], ""warnings"": []} ``` ``` - name: Configuring RO/RW community string in new IPv4/SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server community {{ snmp.ssh.community }} {{ snmp.community_type }} {{ snmp.acl_name }}"" when: snmp.ip_version == 'ip' register: result ``` ``` TASK [ios_snmp : Configuring RO/RW community string in new IPv4/SNMPv2c] ******* changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server community xxxxxxxx ro authorized-snmp-nms""], ""warnings"": []} changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server community xxxxxxxx ro authorized-snmp-nms""], ""warnings"": []} ``` ``` - name: Configuring which NMS is allowed to receive the traps in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server host {{ nms_mgt_ip_address }} {{ snmp.notifications_type }} version {{ snmp.new_version }} {{ snmp.ssh.community }} udp-port {{ snmp.trap_port }}"" when: (snmp.notifications == 'all') or (snmp.notifications == 'list') register: result ``` ``` TASK [ios_snmp : Configuring which NMS is allowed to receive the traps in new SNMPv2c] *** changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server host 172.21.100.1 traps version 2c xxxxxxxx udp-port 162""], ""warnings"": []} changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server host 172.21.100.1 traps version 2c xxxxxxxx udp-port 162""], ""warnings"": []} ``` Not all commands are concerned with this glitch. 
For instance, the second passage does not lead to a change for: ``` - name: Configuring the ACL for authorized NMS in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" parents: - ""{{ snmp.ip_version }} access-list extended {{ snmp.acl_name }}"" lines: - ""permit {{ snmp.ip_version }} host {{ nms_mgt_ip_address }} host {{ ansible_host }} log"" when: snmp.admin_state == ""up"" register: result ``` ``` TASK [ios_snmp : Configuring the ACL for authorized NMS in new SNMPv2c] ******** ok: [XEv_Spine_32] => {""changed"": false, ""warnings"": []} ok: [XEv_Spine_31] => {""changed"": false, ""warnings"": []} ``` ``` - name: Configuring the list of notifications in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server enable traps {{ item }}"" with_items: ""{{ snmp.notifications_list.traps }}"" when: (snmp.notifications == 'list') register: result ``` The previous task works as expected for all the notifications I have tested, except for a few: - cpu - memory - ospf - snmp ``` TASK [ios_snmp : Configuring the list of notifications in new SNMPv2c] ********* ok: [XEv_Spine_31] => (item=bfd) => {""changed"": false, ""item"": ""bfd"", ""warnings"": []} ok: [XEv_Spine_32] => (item=bfd) => {""changed"": false, ""item"": ""bfd"", ""warnings"": []} ok: [XEv_Spine_31] => (item=bgp) => {""changed"": false, ""item"": ""bgp"", ""warnings"": []} ok: [XEv_Spine_32] => (item=bgp) => {""changed"": false, ""item"": ""bgp"", ""warnings"": []} changed: [XEv_Spine_31] => (item=cpu) => {""changed"": true, ""item"": ""cpu"", ""updates"": [""snmp-server enable traps cpu""], ""warnings"": []} changed: [XEv_Spine_32] => (item=cpu) => {""changed"": true, ""item"": ""cpu"", ""updates"": [""snmp-server enable traps cpu""], ""warnings"": []} ok: [XEv_Spine_31] => (item=eigrp) => {""changed"": false, ""item"": ""eigrp"", ""warnings"": []} ok: [XEv_Spine_32] => (item=eigrp) => {""changed"": false, ""item"": ""eigrp"", ""warnings"": []} ok: [XEv_Spine_31] => (item=event-manager) => {""changed"": false, ""item"": ""event-manager"", ""warnings"": []} ok: [XEv_Spine_32] => (item=event-manager) => {""changed"": false, ""item"": ""event-manager"", ""warnings"": []} ok: [XEv_Spine_31] => (item=firewall serverstatus) => {""changed"": false, ""item"": ""firewall serverstatus"", ""warnings"": []} ok: [XEv_Spine_32] => (item=firewall serverstatus) => {""changed"": false, ""item"": ""firewall serverstatus"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike policy add) => {""changed"": false, ""item"": ""ike policy add"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike policy add) => {""changed"": false, ""item"": ""ike policy add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike policy delete) => {""changed"": false, ""item"": ""ike policy delete"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike policy delete) => {""changed"": false, ""item"": ""ike policy delete"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike tunnel start) => {""changed"": false, ""item"": ""ike tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike tunnel start) => {""changed"": false, ""item"": ""ike tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike tunnel stop) => {""changed"": false, ""item"": ""ike tunnel stop"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike tunnel stop) => {""changed"": false, ""item"": ""ike tunnel stop"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap add) => {""changed"": false, ""item"": ""ipsec cryptomap 
add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap add) => {""changed"": false, ""item"": ""ipsec cryptomap add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap attach) => {""changed"": false, ""item"": ""ipsec cryptomap attach"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap attach) => {""changed"": false, ""item"": ""ipsec cryptomap attach"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap delete) => {""changed"": false, ""item"": ""ipsec cryptomap delete"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap delete) => {""changed"": false, ""item"": ""ipsec cryptomap delete"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap detach) => {""changed"": false, ""item"": ""ipsec cryptomap detach"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap detach) => {""changed"": false, ""item"": ""ipsec cryptomap detach"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec too-many-sas) => {""changed"": false, ""item"": ""ipsec too-many-sas"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec too-many-sas) => {""changed"": false, ""item"": ""ipsec too-many-sas"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec tunnel start) => {""changed"": false, ""item"": ""ipsec tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec tunnel start) => {""changed"": false, ""item"": ""ipsec tunnel start"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec tunnel stop) => {""changed"": false, ""item"": ""ipsec tunnel stop"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec tunnel stop) => {""changed"": false, ""item"": ""ipsec tunnel stop"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsla) => {""changed"": false, ""item"": ""ipsla"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsla) => {""changed"": false, ""item"": ""ipsla"", ""warnings"": []} ok: [XEv_Spine_32] => (item=isis) => {""changed"": false, ""item"": ""isis"", ""warnings"": []} ok: [XEv_Spine_31] => (item=isis) => {""changed"": false, ""item"": ""isis"", ""warnings"": []} changed: [XEv_Spine_32] => (item=memory) => {""changed"": true, ""item"": ""memory"", ""updates"": [""snmp-server enable traps memory""], ""warnings"": []} changed: [XEv_Spine_31] => (item=memory) => {""changed"": true, ""item"": ""memory"", ""updates"": [""snmp-server enable traps memory""], ""warnings"": []} changed: [XEv_Spine_32] => (item=ospf) => {""changed"": true, ""item"": ""ospf"", ""updates"": [""snmp-server enable traps ospf""], ""warnings"": []} changed: [XEv_Spine_31] => (item=ospf) => {""changed"": true, ""item"": ""ospf"", ""updates"": [""snmp-server enable traps ospf""], ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 errors) => {""changed"": false, ""item"": ""ospfv3 errors"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 errors) => {""changed"": false, ""item"": ""ospfv3 errors"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 rate-limit 60 150) => {""changed"": false, ""item"": ""ospfv3 rate-limit 60 150"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 rate-limit 60 150) => {""changed"": false, ""item"": ""ospfv3 rate-limit 60 150"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 state-change) => {""changed"": false, ""item"": ""ospfv3 state-change"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 state-change) => {""changed"": false, ""item"": ""ospfv3 state-change"", ""warnings"": []} ok: [XEv_Spine_32] => (item=pfr) => {""changed"": false, ""item"": ""pfr"", ""warnings"": []} ok: [XEv_Spine_31] => 
(item=pfr) => {""changed"": false, ""item"": ""pfr"", ""warnings"": []} ok: [XEv_Spine_32] => (item=pki) => {""changed"": false, ""item"": ""pki"", ""warnings"": []} ok: [XEv_Spine_31] => (item=pki) => {""changed"": false, ""item"": ""pki"", ""warnings"": []} changed: [XEv_Spine_32] => (item=snmp) => {""changed"": true, ""item"": ""snmp"", ""updates"": [""snmp-server enable traps snmp""], ""warnings"": []} changed: [XEv_Spine_31] => (item=snmp) => {""changed"": true, ""item"": ""snmp"", ""updates"": [""snmp-server enable traps snmp""], ""warnings"": []} ok: [XEv_Spine_32] => (item=syslog) => {""changed"": false, ""item"": ""syslog"", ""warnings"": []} ok: [XEv_Spine_31] => (item=syslog) => {""changed"": false, ""item"": ""syslog"", ""warnings"": []} ok: [XEv_Spine_32] => (item=tty) => {""changed"": false, ""item"": ""tty"", ""warnings"": []} ok: [XEv_Spine_31] => (item=tty) => {""changed"": false, ""item"": ""tty"", ""warnings"": []} ``` Saving the running-config in between multiple runs does not change the situation at all. ",True,"ios_config changes some items which do not need to be changed - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.3.0 commit d9b570a config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides core modules commit ec4eebc extras modules commit edbddf6 ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - host: Ubuntu 16.04 4.4.0 - target: **CSR-1000v 16.2.2** ##### SUMMARY There is a strange behavior when some commands are correctly applied multiple times in a row: the previous occurrences are not seen as changed. 
For instance, the following tasks are always applied even though they have already been successfully run: ``` - name: Configuring location, contact & chassis ID information in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server location {{ snmp.location }}"" - ""snmp-server contact {{ snmp.contact }}"" - ""snmp-server chassis-id {{ snmp.chassis_id }}"" - ""snmp-server ifindex persist"" when: snmp.admin_state == ""up"" register: result ``` ``` TASK [ios_snmp : Configuring location, contact & chassis ID information in new SNMPv2c] *** changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server ifindex persist""], ""warnings"": []} changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server ifindex persist""], ""warnings"": []} ``` ``` - name: Configuring RO/RW community string in new IPv4/SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server community {{ snmp.ssh.community }} {{ snmp.community_type }} {{ snmp.acl_name }}"" when: snmp.ip_version == 'ip' register: result ``` ``` TASK [ios_snmp : Configuring RO/RW community string in new IPv4/SNMPv2c] ******* changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server community xxxxxxxx ro authorized-snmp-nms""], ""warnings"": []} changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server community xxxxxxxx ro authorized-snmp-nms""], ""warnings"": []} ``` ``` - name: Configuring which NMS is allowed to receive the traps in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server host {{ nms_mgt_ip_address }} {{ snmp.notifications_type }} version {{ snmp.new_version }} {{ snmp.ssh.community }} udp-port {{ snmp.trap_port }}"" when: (snmp.notifications == 'all') or (snmp.notifications == 'list') register: result ``` ``` TASK [ios_snmp : Configuring which NMS is allowed to receive the traps in new SNMPv2c] *** changed: [XEv_Spine_31] => {""changed"": true, ""updates"": [""snmp-server host 172.21.100.1 traps version 2c xxxxxxxx udp-port 162""], ""warnings"": []} changed: [XEv_Spine_32] => {""changed"": true, ""updates"": [""snmp-server host 172.21.100.1 traps version 2c xxxxxxxx udp-port 162""], ""warnings"": []} ``` Not all commands are concerned with this glitch. 
For instance, the second passage does not lead to a change for: ``` - name: Configuring the ACL for authorized NMS in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" parents: - ""{{ snmp.ip_version }} access-list extended {{ snmp.acl_name }}"" lines: - ""permit {{ snmp.ip_version }} host {{ nms_mgt_ip_address }} host {{ ansible_host }} log"" when: snmp.admin_state == ""up"" register: result ``` ``` TASK [ios_snmp : Configuring the ACL for authorized NMS in new SNMPv2c] ******** ok: [XEv_Spine_32] => {""changed"": false, ""warnings"": []} ok: [XEv_Spine_31] => {""changed"": false, ""warnings"": []} ``` ``` - name: Configuring the list of notifications in new SNMPv{{ snmp.new_version }} ios_config: provider: ""{{ connections.ssh }}"" lines: - ""snmp-server enable traps {{ item }}"" with_items: ""{{ snmp.notifications_list.traps }}"" when: (snmp.notifications == 'list') register: result ``` The previous task works as expected for all the notifications I have tested, except for a few: - cpu - memory - ospf - snmp ``` TASK [ios_snmp : Configuring the list of notifications in new SNMPv2c] ********* ok: [XEv_Spine_31] => (item=bfd) => {""changed"": false, ""item"": ""bfd"", ""warnings"": []} ok: [XEv_Spine_32] => (item=bfd) => {""changed"": false, ""item"": ""bfd"", ""warnings"": []} ok: [XEv_Spine_31] => (item=bgp) => {""changed"": false, ""item"": ""bgp"", ""warnings"": []} ok: [XEv_Spine_32] => (item=bgp) => {""changed"": false, ""item"": ""bgp"", ""warnings"": []} changed: [XEv_Spine_31] => (item=cpu) => {""changed"": true, ""item"": ""cpu"", ""updates"": [""snmp-server enable traps cpu""], ""warnings"": []} changed: [XEv_Spine_32] => (item=cpu) => {""changed"": true, ""item"": ""cpu"", ""updates"": [""snmp-server enable traps cpu""], ""warnings"": []} ok: [XEv_Spine_31] => (item=eigrp) => {""changed"": false, ""item"": ""eigrp"", ""warnings"": []} ok: [XEv_Spine_32] => (item=eigrp) => {""changed"": false, ""item"": ""eigrp"", ""warnings"": []} ok: [XEv_Spine_31] => (item=event-manager) => {""changed"": false, ""item"": ""event-manager"", ""warnings"": []} ok: [XEv_Spine_32] => (item=event-manager) => {""changed"": false, ""item"": ""event-manager"", ""warnings"": []} ok: [XEv_Spine_31] => (item=firewall serverstatus) => {""changed"": false, ""item"": ""firewall serverstatus"", ""warnings"": []} ok: [XEv_Spine_32] => (item=firewall serverstatus) => {""changed"": false, ""item"": ""firewall serverstatus"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike policy add) => {""changed"": false, ""item"": ""ike policy add"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike policy add) => {""changed"": false, ""item"": ""ike policy add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike policy delete) => {""changed"": false, ""item"": ""ike policy delete"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike policy delete) => {""changed"": false, ""item"": ""ike policy delete"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike tunnel start) => {""changed"": false, ""item"": ""ike tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike tunnel start) => {""changed"": false, ""item"": ""ike tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ike tunnel stop) => {""changed"": false, ""item"": ""ike tunnel stop"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ike tunnel stop) => {""changed"": false, ""item"": ""ike tunnel stop"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap add) => {""changed"": false, ""item"": ""ipsec cryptomap 
add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap add) => {""changed"": false, ""item"": ""ipsec cryptomap add"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap attach) => {""changed"": false, ""item"": ""ipsec cryptomap attach"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap attach) => {""changed"": false, ""item"": ""ipsec cryptomap attach"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap delete) => {""changed"": false, ""item"": ""ipsec cryptomap delete"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap delete) => {""changed"": false, ""item"": ""ipsec cryptomap delete"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec cryptomap detach) => {""changed"": false, ""item"": ""ipsec cryptomap detach"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec cryptomap detach) => {""changed"": false, ""item"": ""ipsec cryptomap detach"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec too-many-sas) => {""changed"": false, ""item"": ""ipsec too-many-sas"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec too-many-sas) => {""changed"": false, ""item"": ""ipsec too-many-sas"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec tunnel start) => {""changed"": false, ""item"": ""ipsec tunnel start"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec tunnel start) => {""changed"": false, ""item"": ""ipsec tunnel start"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsec tunnel stop) => {""changed"": false, ""item"": ""ipsec tunnel stop"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsec tunnel stop) => {""changed"": false, ""item"": ""ipsec tunnel stop"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ipsla) => {""changed"": false, ""item"": ""ipsla"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ipsla) => {""changed"": false, ""item"": ""ipsla"", ""warnings"": []} ok: [XEv_Spine_32] => (item=isis) => {""changed"": false, ""item"": ""isis"", ""warnings"": []} ok: [XEv_Spine_31] => (item=isis) => {""changed"": false, ""item"": ""isis"", ""warnings"": []} changed: [XEv_Spine_32] => (item=memory) => {""changed"": true, ""item"": ""memory"", ""updates"": [""snmp-server enable traps memory""], ""warnings"": []} changed: [XEv_Spine_31] => (item=memory) => {""changed"": true, ""item"": ""memory"", ""updates"": [""snmp-server enable traps memory""], ""warnings"": []} changed: [XEv_Spine_32] => (item=ospf) => {""changed"": true, ""item"": ""ospf"", ""updates"": [""snmp-server enable traps ospf""], ""warnings"": []} changed: [XEv_Spine_31] => (item=ospf) => {""changed"": true, ""item"": ""ospf"", ""updates"": [""snmp-server enable traps ospf""], ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 errors) => {""changed"": false, ""item"": ""ospfv3 errors"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 errors) => {""changed"": false, ""item"": ""ospfv3 errors"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 rate-limit 60 150) => {""changed"": false, ""item"": ""ospfv3 rate-limit 60 150"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 rate-limit 60 150) => {""changed"": false, ""item"": ""ospfv3 rate-limit 60 150"", ""warnings"": []} ok: [XEv_Spine_32] => (item=ospfv3 state-change) => {""changed"": false, ""item"": ""ospfv3 state-change"", ""warnings"": []} ok: [XEv_Spine_31] => (item=ospfv3 state-change) => {""changed"": false, ""item"": ""ospfv3 state-change"", ""warnings"": []} ok: [XEv_Spine_32] => (item=pfr) => {""changed"": false, ""item"": ""pfr"", ""warnings"": []} ok: [XEv_Spine_31] => 
(item=pfr) => {""changed"": false, ""item"": ""pfr"", ""warnings"": []} ok: [XEv_Spine_32] => (item=pki) => {""changed"": false, ""item"": ""pki"", ""warnings"": []} ok: [XEv_Spine_31] => (item=pki) => {""changed"": false, ""item"": ""pki"", ""warnings"": []} changed: [XEv_Spine_32] => (item=snmp) => {""changed"": true, ""item"": ""snmp"", ""updates"": [""snmp-server enable traps snmp""], ""warnings"": []} changed: [XEv_Spine_31] => (item=snmp) => {""changed"": true, ""item"": ""snmp"", ""updates"": [""snmp-server enable traps snmp""], ""warnings"": []} ok: [XEv_Spine_32] => (item=syslog) => {""changed"": false, ""item"": ""syslog"", ""warnings"": []} ok: [XEv_Spine_31] => (item=syslog) => {""changed"": false, ""item"": ""syslog"", ""warnings"": []} ok: [XEv_Spine_32] => (item=tty) => {""changed"": false, ""item"": ""tty"", ""warnings"": []} ok: [XEv_Spine_31] => (item=tty) => {""changed"": false, ""item"": ""tty"", ""warnings"": []} ``` Saving the running-config in between multiple runs does not change the situation at all. ",1,ios config changes some items which do not need to be changed issue type bug report component name ios config ansible version ansible commit config file etc ansible ansible cfg configured module search path default w o overrides core modules commit extras modules commit configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment host ubuntu target csr summary there is a strange behavior when some commands are correctly applied multiple times in a row the previous occurrences are not seen as changed for instance the following tasks are always applied even though they have already been successfully run name configuring location contact chassis id information in new snmpv snmp new version ios config provider connections ssh lines snmp server location snmp location snmp server contact snmp contact snmp server chassis id snmp chassis id snmp server ifindex persist when snmp admin state up register result task changed changed true updates warnings changed changed true updates warnings name configuring ro rw community string in new snmpv snmp new version ios config provider connections ssh lines snmp server community snmp ssh community snmp community type snmp acl name when snmp ip version ip register result task changed changed true updates warnings changed changed true updates warnings name configuring which nms is allowed to receive the traps in new snmpv snmp new version ios config provider connections ssh lines snmp server host nms mgt ip address snmp notifications type version snmp new version snmp ssh community udp port snmp trap port when snmp notifications all or snmp notifications list register result task changed changed true updates warnings changed changed true updates warnings not all commands are concerned with this glitch for instance the second passage does not lead to a change for name configuring the acl for authorized nms in new snmpv snmp new version ios config provider connections ssh parents snmp ip version access list extended snmp acl name lines permit snmp ip version host nms mgt ip address host ansible host log when snmp admin state up register result task ok changed false warnings ok changed false warnings name configuring the list of notifications in new snmpv snmp new version ios config provider connections ssh lines snmp server enable traps item 
with items snmp notifications list traps when snmp notifications list register result the previous task works as expected for all the notifications i have tested except for a few cpu memory ospf snmp task ok item bfd changed false item bfd warnings ok item bfd changed false item bfd warnings ok item bgp changed false item bgp warnings ok item bgp changed false item bgp warnings changed item cpu changed true item cpu updates warnings changed item cpu changed true item cpu updates warnings ok item eigrp changed false item eigrp warnings ok item eigrp changed false item eigrp warnings ok item event manager changed false item event manager warnings ok item event manager changed false item event manager warnings ok item firewall serverstatus changed false item firewall serverstatus warnings ok item firewall serverstatus changed false item firewall serverstatus warnings ok item ike policy add changed false item ike policy add warnings ok item ike policy add changed false item ike policy add warnings ok item ike policy delete changed false item ike policy delete warnings ok item ike policy delete changed false item ike policy delete warnings ok item ike tunnel start changed false item ike tunnel start warnings ok item ike tunnel start changed false item ike tunnel start warnings ok item ike tunnel stop changed false item ike tunnel stop warnings ok item ike tunnel stop changed false item ike tunnel stop warnings ok item ipsec cryptomap add changed false item ipsec cryptomap add warnings ok item ipsec cryptomap add changed false item ipsec cryptomap add warnings ok item ipsec cryptomap attach changed false item ipsec cryptomap attach warnings ok item ipsec cryptomap attach changed false item ipsec cryptomap attach warnings ok item ipsec cryptomap delete changed false item ipsec cryptomap delete warnings ok item ipsec cryptomap delete changed false item ipsec cryptomap delete warnings ok item ipsec cryptomap detach changed false item ipsec cryptomap detach warnings ok item ipsec cryptomap detach changed false item ipsec cryptomap detach warnings ok item ipsec too many sas changed false item ipsec too many sas warnings ok item ipsec too many sas changed false item ipsec too many sas warnings ok item ipsec tunnel start changed false item ipsec tunnel start warnings ok item ipsec tunnel start changed false item ipsec tunnel start warnings ok item ipsec tunnel stop changed false item ipsec tunnel stop warnings ok item ipsec tunnel stop changed false item ipsec tunnel stop warnings ok item ipsla changed false item ipsla warnings ok item ipsla changed false item ipsla warnings ok item isis changed false item isis warnings ok item isis changed false item isis warnings changed item memory changed true item memory updates warnings changed item memory changed true item memory updates warnings changed item ospf changed true item ospf updates warnings changed item ospf changed true item ospf updates warnings ok item errors changed false item errors warnings ok item errors changed false item errors warnings ok item rate limit changed false item rate limit warnings ok item rate limit changed false item rate limit warnings ok item state change changed false item state change warnings ok item state change changed false item state change warnings ok item pfr changed false item pfr warnings ok item pfr changed false item pfr warnings ok item pki changed false item pki warnings ok item pki changed false item pki warnings changed item snmp changed true item snmp updates warnings changed item snmp changed true item 
snmp updates warnings ok item syslog changed false item syslog warnings ok item syslog changed false item syslog warnings ok item tty changed false item tty warnings ok item tty changed false item tty warnings saving the running config in between multiple runs does not change the situation at all ,1 967,4707894660.0,IssuesEvent,2016-10-13 21:31:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,VMware Template playbook options,affects_2.2 bug_report cloud P2 vmware waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY Hi I am setting up a new VMware virtual machine from a template using a Ansible playbook. I want to be able to change the datastore and the network (vm_disk and vm_nic) of the VM during the setup of the VM. But when I add this information into the playbook (see below), nothing happens. The new VM is created successfully and Ansible returns a success, but the datastore and network have not been adjust to what I requested in the playbook. They have remained the same as what the template image is. Am I doing something incorrect in the playbook? Or is this not possible with Ansible? Playbook (highlighted in bold is what is not being adjusted) `--- - hosts: 127.0.0.1 connection: local user: root sudo: false gather_facts: false serial: 1 vars: vcenter_hostname: server.local esxhost: 172.25.10.10 nic_type: e1000e network: Web Servers network_type: standard vmcluster: UK-CLUSTER username: admin password: password folder: Utilities notes: Created by Ansible tasks: - name: Create VM from template vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" guest: ""{{ name }}"" vm_extra_config: notes: ""{{ notes }}"" folder: ""{{ folder }}"" from_template: yes template_src: ""{{ vmtemplate }}"" cluster: ""{{ vmcluster }}"" vm_disk: disk1: type: ""{{ disktype }}"" datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""{{ nic_type }}"" network: ""{{ network }}"" network_type: ""{{ network_type }}"" resource_pool: ""/Resources"" esxi: datacenter: UK hostname: ""{{ esxhost }}""` If I look at the example on the Ansible website, it doesn't look like it gives the option to allow this unless you setup a VM from an ISO file. (http://docs.ansible.com/ansible/vsphere_guest_module.html) I want to have the same functionality if I use a template. Cheers ",True,"VMware Template playbook options - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY Hi I am setting up a new VMware virtual machine from a template using a Ansible playbook. I want to be able to change the datastore and the network (vm_disk and vm_nic) of the VM during the setup of the VM. But when I add this information into the playbook (see below), nothing happens. The new VM is created successfully and Ansible returns a success, but the datastore and network have not been adjust to what I requested in the playbook. They have remained the same as what the template image is. Am I doing something incorrect in the playbook? Or is this not possible with Ansible? 
Playbook (highlighted in bold is what is not being adjusted) `--- - hosts: 127.0.0.1 connection: local user: root sudo: false gather_facts: false serial: 1 vars: vcenter_hostname: server.local esxhost: 172.25.10.10 nic_type: e1000e network: Web Servers network_type: standard vmcluster: UK-CLUSTER username: admin password: password folder: Utilities notes: Created by Ansible tasks: - name: Create VM from template vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ username }}"" password: ""{{ password }}"" guest: ""{{ name }}"" vm_extra_config: notes: ""{{ notes }}"" folder: ""{{ folder }}"" from_template: yes template_src: ""{{ vmtemplate }}"" cluster: ""{{ vmcluster }}"" vm_disk: disk1: type: ""{{ disktype }}"" datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""{{ nic_type }}"" network: ""{{ network }}"" network_type: ""{{ network_type }}"" resource_pool: ""/Resources"" esxi: datacenter: UK hostname: ""{{ esxhost }}""` If I look at the example on the Ansible website, it doesn't look like it gives the option to allow this unless you setup a VM from an ISO file. (http://docs.ansible.com/ansible/vsphere_guest_module.html) I want to have the same functionality if I use a template. Cheers ",1,vmware template playbook options issue type bug report component name vsphere guest ansible version n a summary hi i am setting up a new vmware virtual machine from a template using a ansible playbook i want to be able to change the datastore and the network vm disk and vm nic of the vm during the setup of the vm but when i add this information into the playbook see below nothing happens the new vm is created successfully and ansible returns a success but the datastore and network have not been adjust to what i requested in the playbook they have remained the same as what the template image is am i doing something incorrect in the playbook or is this not possible with ansible playbook highlighted in bold is what is not being adjusted hosts connection local user root sudo false gather facts false serial vars vcenter hostname server local esxhost nic type network web servers network type standard vmcluster uk cluster username admin password password folder utilities notes created by ansible tasks name create vm from template vsphere guest vcenter hostname vcenter hostname username username password password guest name vm extra config notes notes folder folder from template yes template src vmtemplate cluster vmcluster vm disk type disktype datastore datastore vm nic type nic type network network network type network type resource pool resources esxi datacenter uk hostname esxhost if i look at the example on the ansible website it doesn t look like it gives the option to allow this unless you setup a vm from an iso file i want to have the same functionality if i use a template cheers ,1 1130,4998415563.0,IssuesEvent,2016-12-09 19:47:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,replace: TypeError when run under Python 3,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - files/replace ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD cdec853e37) last updated 2016/12/01 10:23:30 (GMT +200) lib/ansible/modules/core: (detached HEAD fe9c56a003) last updated 2016/12/01 10:24:42 (GMT +200) lib/ansible/modules/extras: (detached HEAD f564e77a08) last updated 2016/12/01 10:24:42 (GMT +200) config file = configured module search path = 
['/Users/per/Projects/servers/submodules/ansible/library'] ``` ##### CONFIGURATION - ansible_python_interpreter=/usr/bin/python3 ##### OS / ENVIRONMENT - Local: MacOS - Remote: Ubuntu 16.04, with Python 3.5.2 ##### SUMMARY Replace fails with a TypeError when run under Python 3 ##### STEPS TO REPRODUCE ``` ansible *** -m replace \ -a ""dest=/etc/lsb-release regexp=nomatchfound replace=nomatchfound"" ``` ##### EXPECTED RESULTS No replacement (since regex doesn't match). ##### ACTUAL RESULTS Command fails with error: `TypeError: cannot use a string pattern on a bytes-like object` ``` *** | FAILED! => { ""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to *** closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in \r\n main()\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main\r\n result = re.subn(mre, params['replace'], contents, 0)\r\n File \""/usr/lib/python3.5/re.py\"", line 193, in subn\r\n return _compile(pattern, flags).subn(repl, string, count)\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n"", ""msg"": ""MODULE FAILURE"" } ``` Readable traceback (from above): ``` Traceback (most recent call last): File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in main() File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main result = re.subn(mre, params['replace'], contents, 0) File \""/usr/lib/python3.5/re.py\"", line 193, in subn return _compile(pattern, flags).subn(repl, string, count) TypeError: cannot use a string pattern on a bytes-like object ``` ",True,"replace: TypeError when run under Python 3 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - files/replace ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD cdec853e37) last updated 2016/12/01 10:23:30 (GMT +200) lib/ansible/modules/core: (detached HEAD fe9c56a003) last updated 2016/12/01 10:24:42 (GMT +200) lib/ansible/modules/extras: (detached HEAD f564e77a08) last updated 2016/12/01 10:24:42 (GMT +200) config file = configured module search path = ['/Users/per/Projects/servers/submodules/ansible/library'] ``` ##### CONFIGURATION - ansible_python_interpreter=/usr/bin/python3 ##### OS / ENVIRONMENT - Local: MacOS - Remote: Ubuntu 16.04, with Python 3.5.2 ##### SUMMARY Replace fails with a TypeError when run under Python 3 ##### STEPS TO REPRODUCE ``` ansible *** -m replace \ -a ""dest=/etc/lsb-release regexp=nomatchfound replace=nomatchfound"" ``` ##### EXPECTED RESULTS No replacement (since regex doesn't match). ##### ACTUAL RESULTS Command fails with error: `TypeError: cannot use a string pattern on a bytes-like object` ``` *** | FAILED! 
=> { ""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to *** closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in \r\n main()\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main\r\n result = re.subn(mre, params['replace'], contents, 0)\r\n File \""/usr/lib/python3.5/re.py\"", line 193, in subn\r\n return _compile(pattern, flags).subn(repl, string, count)\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n"", ""msg"": ""MODULE FAILURE"" } ``` Readable traceback (from above): ``` Traceback (most recent call last): File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in main() File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main result = re.subn(mre, params['replace'], contents, 0) File \""/usr/lib/python3.5/re.py\"", line 193, in subn return _compile(pattern, flags).subn(repl, string, count) TypeError: cannot use a string pattern on a bytes-like object ``` ",1,replace typeerror when run under python issue type bug report component name files replace ansible version ansible detached head last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path configuration ansible python interpreter usr bin os environment local macos remote ubuntu with python summary replace fails with a typeerror when run under python steps to reproduce ansible m replace a dest etc lsb release regexp nomatchfound replace nomatchfound expected results no replacement since regex doesn t match actual results command fails with error typeerror cannot use a string pattern on a bytes like object failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module replace py line in r n main r n file tmp ansible ansible module replace py line in main r n result re subn mre params contents r n file usr lib re py line in subn r n return compile pattern flags subn repl string count r ntypeerror cannot use a string pattern on a bytes like object r n msg module failure readable traceback from above traceback most recent call last file tmp ansible ansible module replace py line in main file tmp ansible ansible module replace py line in main result re subn mre params contents file usr lib re py line in subn return compile pattern flags subn repl string count typeerror cannot use a string pattern on a bytes like object ,1 1028,4822175623.0,IssuesEvent,2016-11-05 18:25:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pip does not working with virtualenv,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pip ##### ANSIBLE VERSION ``` 2.2.0.0 ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY PIP does not work with pyvenv-3.5 after update to ansible 2.2 ##### STEPS TO REPRODUCE Task ``` - name: Update setuptools and pip become: true become_user: ""www-data"" pip: name={{ item }} state=latest virtualenv=/www/env/site virtualenv_command=pyvenv-3.5 with_items: - pip - setuptools - wheel ``` ##### EXPECTED RESULTS Successful update of pip and others in /www/env/site virtualenv ##### ACTUAL RESULTS Fails on update system pip2 ``` failed: [test] (item=setuptools) => {""cmd"": ""/usr/local/bin/pip2 install -U 
setuptools"", ""failed"": true, ""item"": ""setuptools"", ""msg"": ""stdout: Collecting setuptools\n Using cached setuptools-28.7.1-py2.py3-none-any.whl\nInstalling collec ted packages: setuptools\n Found existing installation: setuptools 25.1.6\n Uninstalling setuptools-25.1.6:\n\n:stderr: Exception:\nTraceback (most recent call last):\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.eg g/pip/basecommand.py\"", line 215, in main\n status = self.run(options, args)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py\"", line 317, in run\n prefix=options.prefix_path,\n File \""/u sr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py\"", line 736, in install\n requirement.uninstall(auto_confirm=True)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py\ "", line 742, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py\"", line 115, in remove\n renames(path, new_path)\n File \""/usr/local/lib /python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py\"", line 267, in renames\n shutil.move(old, new)\n File \""/usr/lib/python2.7/shutil.py\"", line 303, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/local/bin/easy_install'\n""} ``` ",True,"pip does not working with virtualenv - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pip ##### ANSIBLE VERSION ``` 2.2.0.0 ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY PIP does not work with pyvenv-3.5 after update to ansible 2.2 ##### STEPS TO REPRODUCE Task ``` - name: Update setuptools and pip become: true become_user: ""www-data"" pip: name={{ item }} state=latest virtualenv=/www/env/site virtualenv_command=pyvenv-3.5 with_items: - pip - setuptools - wheel ``` ##### EXPECTED RESULTS Successful update of pip and others in /www/env/site virtualenv ##### ACTUAL RESULTS Fails on update system pip2 ``` failed: [test] (item=setuptools) => {""cmd"": ""/usr/local/bin/pip2 install -U setuptools"", ""failed"": true, ""item"": ""setuptools"", ""msg"": ""stdout: Collecting setuptools\n Using cached setuptools-28.7.1-py2.py3-none-any.whl\nInstalling collec ted packages: setuptools\n Found existing installation: setuptools 25.1.6\n Uninstalling setuptools-25.1.6:\n\n:stderr: Exception:\nTraceback (most recent call last):\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.eg g/pip/basecommand.py\"", line 215, in main\n status = self.run(options, args)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py\"", line 317, in run\n prefix=options.prefix_path,\n File \""/u sr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py\"", line 736, in install\n requirement.uninstall(auto_confirm=True)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py\ "", line 742, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \""/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py\"", line 115, in remove\n renames(path, new_path)\n File \""/usr/local/lib /python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py\"", line 267, in renames\n shutil.move(old, new)\n File \""/usr/lib/python2.7/shutil.py\"", line 303, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/local/bin/easy_install'\n""} ``` ",1,pip does not working with virtualenv 
issue type bug report component name pip ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific debian jessie summary pip does not work with pyvenv after update to ansible steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used task name update setuptools and pip become true become user www data pip name item state latest virtualenv www env site virtualenv command pyvenv with items pip setuptools wheel expected results successful update of pip and others in www env site virtualenv actual results fails on update system failed item setuptools cmd usr local bin install u setuptools failed true item setuptools msg stdout collecting setuptools n using cached setuptools none any whl ninstalling collec ted packages setuptools n found existing installation setuptools n uninstalling setuptools n n stderr exception ntraceback most recent call last n file usr local lib dist packages pip eg g pip basecommand py line in main n status self run options args n file usr local lib dist packages pip egg pip commands install py line in run n prefix options prefix path n file u sr local lib dist packages pip egg pip req req set py line in install n requirement uninstall auto confirm true n file usr local lib dist packages pip egg pip req req install py line in uninstall n paths to remove remove auto confirm n file usr local lib dist packages pip egg pip req req uninstall py line in remove n renames path new path n file usr local lib dist packages pip egg pip utils init py line in renames n shutil move old new n file usr lib shutil py line in move n os unlink src noserror permission denied usr local bin easy install n ,1 1755,6574971624.0,IssuesEvent,2017-09-11 14:39:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container - missing cap_drop,affects_2.1 cloud docker feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container module ##### ANSIBLE VERSION ansible 2.1.2.0 ##### SUMMARY docker_container module does not support removing capabilities (i.e. docker --cap-drop option). cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now. ",True,"docker_container - missing cap_drop - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container module ##### ANSIBLE VERSION ansible 2.1.2.0 ##### SUMMARY docker_container module does not support removing capabilities (i.e. docker --cap-drop option). cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now. 
",1,docker container missing cap drop issue type feature idea component name docker container module ansible version ansible summary docker container module does not support removing capabilities i e docker cap drop option cap drop and cap add were added to the docker module in ansible but that module is marked as deprecated now ,1 957,4702293769.0,IssuesEvent,2016-10-13 01:20:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pn_vlan.py parameter parsing error,affects_2.3 bug_report networking P1 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pn_vlan ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel aa1ec8af17) last updated 2016/10/07 11:07:23 (GMT +100) lib/ansible/modules/core: (devel 149f10f8b7) last updated 2016/10/07 11:07:26 (GMT +100) lib/ansible/modules/extras: (devel cc2651422a) last updated 2016/10/07 11:07:27 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blob/devel/network/netvisor/pn_vlan.py#L225 contains ```python arguement_spec = pn_arguement_spec ``` which gives a NameError exception ##### STEPS TO REPRODUCE ```python ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"pn_vlan.py parameter parsing error - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pn_vlan ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel aa1ec8af17) last updated 2016/10/07 11:07:23 (GMT +100) lib/ansible/modules/core: (devel 149f10f8b7) last updated 2016/10/07 11:07:26 (GMT +100) lib/ansible/modules/extras: (devel cc2651422a) last updated 2016/10/07 11:07:27 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blob/devel/network/netvisor/pn_vlan.py#L225 contains ```python arguement_spec = pn_arguement_spec ``` which gives a NameError exception ##### STEPS TO REPRODUCE ```python ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,pn vlan py parameter parsing error issue type bug report component name pn vlan ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary contains python arguement spec pn arguement spec which gives a nameerror exception steps to reproduce python expected results actual results ,1 1560,6572254684.0,IssuesEvent,2017-09-11 00:39:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,please make lineinfile support edit in-place,affects_2.3 feature_idea waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION N/A ##### SUMMARY Docker makes /etc/resolv.conf a mount point, so that we can't do ""mv some_tmp /etc/resolv.conf"", ""lineinfile"" fails to update /etc/resolv.conf. Currently I use module ""shell"" to bypass this issue, but this is long and ugly, and I don't like Ansible always reports something changed, please consider adding an ""in-place"" option to ""lineinfile"", thanks! ",True,"please make lineinfile support edit in-place - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION N/A ##### SUMMARY Docker makes /etc/resolv.conf a mount point, so that we can't do ""mv some_tmp /etc/resolv.conf"", ""lineinfile"" fails to update /etc/resolv.conf. 
Currently I use module ""shell"" to bypass this issue, but this is long and ugly, and I don't like Ansible always reports something changed, please consider adding an ""in-place"" option to ""lineinfile"", thanks! ",1,please make lineinfile support edit in place issue type feature idea component name lineinfile module ansible version n a summary docker makes etc resolv conf a mount point so that we can t do mv some tmp etc resolv conf lineinfile fails to update etc resolv conf currently i use module shell to bypass this issue but this is long and ugly and i don t like ansible always reports something changed please consider adding an in place option to lineinfile thanks ,1 1738,6574876376.0,IssuesEvent,2017-09-11 14:21:55,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template action could use ignore_regexp to skip a template change if the change matches the given regexp,affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME template ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### SUMMARY For certain configuration like bind database, we use ansible_date_time.epoch as the serial number, like this ``` {{ ansible_date_time.epoch }} ; Serial ``` And a change to the db file will trigger the bind service to be restarted. It works fine if there is real change. But sometimes the only change is the epoch thus the serial number change, in which case, we don't really want to proceed. So we would like to propose a ignore_regexp setting to template(or copy) action, once this attribute is set, when template action is performed, such regexp will be used to filter out the lines in the template output and the existing file, so the action will be only considered **changed** if the filtered output and existing file are still different. I think such a change will not only benefit our use case, but will also help address issues like https://github.com/ansible/ansible/issues/5317. ",True,"template action could use ignore_regexp to skip a template change if the change matches the given regexp - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME template ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### SUMMARY For certain configuration like bind database, we use ansible_date_time.epoch as the serial number, like this ``` {{ ansible_date_time.epoch }} ; Serial ``` And a change to the db file will trigger the bind service to be restarted. It works fine if there is real change. But sometimes the only change is the epoch thus the serial number change, in which case, we don't really want to proceed. So we would like to propose a ignore_regexp setting to template(or copy) action, once this attribute is set, when template action is performed, such regexp will be used to filter out the lines in the template output and the existing file, so the action will be only considered **changed** if the filtered output and existing file are still different. I think such a change will not only benefit our use case, but will also help address issues like https://github.com/ansible/ansible/issues/5317. 
",1,template action could use ignore regexp to skip a template change if the change matches the given regexp issue type feature idea component name template ansible version ansible summary for certain configuration like bind database we use ansible date time epoch as the serial number like this ansible date time epoch serial and a change to the db file will trigger the bind service to be restarted it works fine if there is real change but sometimes the only change is the epoch thus the serial number change in which case we don t really want to proceed so we would like to propose a ignore regexp setting to template or copy action once this attribute is set when template action is performed such regexp will be used to filter out the lines in the template output and the existing file so the action will be only considered changed if the filtered output and existing file are still different i think such a change will not only benefit our use case but will also help address issues like ,1 1796,6575902631.0,IssuesEvent,2017-09-11 17:46:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami_find fails due missing 'creationDate' when run against a Helion Eucalyptus EC2 cloud,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami_find ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION in ./ansible.cfg: ``` [defaults] jinja2_extensions = jinja2.ext.with_ ``` ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY The `ec2_ami_find` module fails when run against a Helion Eucalyptus EC2 cloud with error on missing 'creationDate' ``` Traceback (most recent call last): File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 423, in main() File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 377, in main 'creationDate': image.creationDate, AttributeError: 'Image' object has no attribute 'creationDate' ``` Reason is that the Helion Eucalyptus does not report a ""creationDate"" for images. I verified this by observing the HTTP response from Eucalyptus. Also in the command `euca-describe-images` there is no mentioning of a creation Date, see [latest docs](http://docs.hpcloud.com/eucalyptus/4.3.0/#euca2ools-guide/euca-describe-images.html). ##### STEPS TO REPRODUCE Just run `ec2_ami_find` against a Eucalyptus EC2 endpoint. ``` - hosts: all connection: local gather_facts: false tasks: - name: Find images ec2_ami_find: ec2_url: '{{ ec2_url }}' # use Eucalyptus endpoint aws_access_key: '{{ aws_access_key }}' aws_secret_key: '{{ aws_secret_key }}' name: some-name* register: images - debug: msg: '{{ images }}' ``` ##### EXPECTED RESULTS Successful run of `ec2_ami_find`, printed list of images ##### ACTUAL RESULTS Traceback of failure. 
``` Traceback (most recent call last): File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 423, in main() File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 377, in main 'creationDate': image.creationDate, AttributeError: 'Image' object has no attribute 'creationDate' ``` ",True,"ec2_ami_find fails due missing 'creationDate' when run against a Helion Eucalyptus EC2 cloud - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami_find ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION in ./ansible.cfg: ``` [defaults] jinja2_extensions = jinja2.ext.with_ ``` ##### OS / ENVIRONMENT Ubuntu 16.04 ##### SUMMARY The `ec2_ami_find` module fails when run against a Helion Eucalyptus EC2 cloud with error on missing 'creationDate' ``` Traceback (most recent call last): File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 423, in main() File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 377, in main 'creationDate': image.creationDate, AttributeError: 'Image' object has no attribute 'creationDate' ``` Reason is that the Helion Eucalyptus does not report a ""creationDate"" for images. I verified this by observing the HTTP response from Eucalyptus. Also in the command `euca-describe-images` there is no mentioning of a creation Date, see [latest docs](http://docs.hpcloud.com/eucalyptus/4.3.0/#euca2ools-guide/euca-describe-images.html). ##### STEPS TO REPRODUCE Just run `ec2_ami_find` against a Eucalyptus EC2 endpoint. ``` - hosts: all connection: local gather_facts: false tasks: - name: Find images ec2_ami_find: ec2_url: '{{ ec2_url }}' # use Eucalyptus endpoint aws_access_key: '{{ aws_access_key }}' aws_secret_key: '{{ aws_secret_key }}' name: some-name* register: images - debug: msg: '{{ images }}' ``` ##### EXPECTED RESULTS Successful run of `ec2_ami_find`, printed list of images ##### ACTUAL RESULTS Traceback of failure. 
``` Traceback (most recent call last): File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 423, in main() File ""/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py"", line 377, in main 'creationDate': image.creationDate, AttributeError: 'Image' object has no attribute 'creationDate' ``` ",1, ami find fails due missing creationdate when run against a helion eucalyptus cloud issue type bug report component name ami find ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables in ansible cfg extensions ext with os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary the ami find module fails when run against a helion eucalyptus cloud with error on missing creationdate traceback most recent call last file tmp ansible ansible module ami find py line in main file tmp ansible ansible module ami find py line in main creationdate image creationdate attributeerror image object has no attribute creationdate reason is that the helion eucalyptus does not report a creationdate for images i verified this by observing the http response from eucalyptus also in the command euca describe images there is no mentioning of a creation date see steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used just run ami find against a eucalyptus endpoint hosts all connection local gather facts false tasks name find images ami find url url use eucalyptus endpoint aws access key aws access key aws secret key aws secret key name some name register images debug msg images expected results successful run of ami find printed list of images actual results traceback of failure traceback most recent call last file tmp ansible ansible module ami find py line in main file tmp ansible ansible module ami find py line in main creationdate image creationdate attributeerror image object has no attribute creationdate ,1 1814,6577317634.0,IssuesEvent,2017-09-12 00:04:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Modifying ec2 security groups in ansible playbook doesn't work for running instances,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT Ubuntu on Windows 10 ##### SUMMARY Changing the group_id value for a running instance has no effect on it's security groups ##### STEPS TO REPRODUCE Run playbook that creates ec2 instance Add extra group_id to playbook Run playbook again ##### EXPECTED RESULTS ec2 instance is in both security groups ##### ACTUAL RESULTS security group of instance is not changed. 
",True,"Modifying ec2 security groups in ansible playbook doesn't work for running instances - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### OS / ENVIRONMENT Ubuntu on Windows 10 ##### SUMMARY Changing the group_id value for a running instance has no effect on it's security groups ##### STEPS TO REPRODUCE Run playbook that creates ec2 instance Add extra group_id to playbook Run playbook again ##### EXPECTED RESULTS ec2 instance is in both security groups ##### ACTUAL RESULTS security group of instance is not changed. ",1,modifying security groups in ansible playbook doesn t work for running instances issue type bug report component name module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment ubuntu on windows summary changing the group id value for a running instance has no effect on it s security groups steps to reproduce run playbook that creates instance add extra group id to playbook run playbook again expected results instance is in both security groups actual results security group of instance is not changed ,1 1126,4997037971.0,IssuesEvent,2016-12-09 15:40:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git - in Ansible 2.1 switching a shallow clone from tag to branch fails,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Vanilla ##### OS / ENVIRONMENT Happens on both OS X and Ubuntu with different versions of git (1.8 and 1.9) ##### SUMMARY Updating a shallow clone based on a tag to one based on a branch fails ##### STEPS TO REPRODUCE Run this with `ansible-playbook -i 127.0.0.1, ~/tmp/test.yml; rm /tmp/testclone` ``` - hosts: 127.0.0.1 connection: local gather_facts: no tasks: - name: initial clone from tag git: repo: git@github.com:adamchainz/nose-randomly.git dest: /tmp/testclone version: 1.2.0 depth: 1 - name: update to branch git: repo: git@github.com:adamchainz/nose-randomly.git dest: /tmp/testclone version: master depth: 1 ``` ##### EXPECTED RESULTS On Ansible 2.0.2.0, it works fine: ``` PLAY [127.0.0.1] *************************************************************** TASK [initial clone from tag] ************************************************** changed: [127.0.0.1] TASK [update to branch] ******************************************************** changed: [127.0.0.1] PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=2 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS On Ansible 2.1.0.0, I get: ``` PLAY [127.0.0.1] *************************************************************** TASK [initial clone from tag] ************************************************** changed: [127.0.0.1] TASK [update to branch] ******************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD' fatal: [127.0.0.1]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 877, in \n main()\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 832, in main\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 530, in fetch\n currenthead = get_head_branch(git_path, module, dest, remote)\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 498, in get_head_branch\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\nIOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/adamj/tmp/test.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=1 changed=1 unreachable=0 failed=1 ``` It looks like `fetch` has started assuming that if we're updating to a remote branch, we must currently be on a branch, as per lines 530-532: ``` py elif is_remote_branch(git_path, module, dest, repo, version): currenthead = get_head_branch(git_path, module, dest, remote) if currenthead != version: ``` or that `get_head_branch` should work, but doesn't, when we're on a tag. N.B. this is the contents of the `.git` directory after the clone from tag: ``` $ tree /tmp/testclone/.git/refs /tmp/testclone/.git/refs ├── heads └── tags 2 directories, 0 files ``` ",True,"git - in Ansible 2.1 switching a shallow clone from tag to branch fails - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Vanilla ##### OS / ENVIRONMENT Happens on both OS X and Ubuntu with different versions of git (1.8 and 1.9) ##### SUMMARY Updating a shallow clone based on a tag to one based on a branch fails ##### STEPS TO REPRODUCE Run this with `ansible-playbook -i 127.0.0.1, ~/tmp/test.yml; rm /tmp/testclone` ``` - hosts: 127.0.0.1 connection: local gather_facts: no tasks: - name: initial clone from tag git: repo: git@github.com:adamchainz/nose-randomly.git dest: /tmp/testclone version: 1.2.0 depth: 1 - name: update to branch git: repo: git@github.com:adamchainz/nose-randomly.git dest: /tmp/testclone version: master depth: 1 ``` ##### EXPECTED RESULTS On Ansible 2.0.2.0, it works fine: ``` PLAY [127.0.0.1] *************************************************************** TASK [initial clone from tag] ************************************************** changed: [127.0.0.1] TASK [update to branch] ******************************************************** changed: [127.0.0.1] PLAY RECAP ********************************************************************* 127.0.0.1 : ok=2 changed=2 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS On Ansible 2.1.0.0, I get: ``` PLAY [127.0.0.1] *************************************************************** TASK [initial clone from tag] ************************************************** changed: [127.0.0.1] TASK [update to branch] ******************************************************** An exception occurred during task execution. 
To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD' fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 877, in \n main()\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 832, in main\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 530, in fetch\n currenthead = get_head_branch(git_path, module, dest, remote)\n File \""/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\"", line 498, in get_head_branch\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\nIOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/Users/adamj/tmp/test.retry PLAY RECAP ********************************************************************* 127.0.0.1 : ok=1 changed=1 unreachable=0 failed=1 ``` It looks like `fetch` has started assuming that if we're updating to a remote branch, we must currently be on a branch, as per lines 530-532: ``` py elif is_remote_branch(git_path, module, dest, repo, version): currenthead = get_head_branch(git_path, module, dest, remote) if currenthead != version: ``` or that `get_head_branch` should work, but doesn't, when we're on a tag. N.B. 
this is the contents of the `.git` directory after the clone from tag: ``` $ tree /tmp/testclone/.git/refs /tmp/testclone/.git/refs ├── heads └── tags 2 directories, 0 files ``` ",1,git in ansible switching a shallow clone from tag to branch fails issue type bug report component name git module ansible version ansible config file configured module search path default w o overrides configuration vanilla os environment happens on both os x and ubuntu with different versions of git and summary updating a shallow clone based on a tag to one based on a branch fails steps to reproduce run this with ansible playbook i tmp test yml rm tmp testclone hosts connection local gather facts no tasks name initial clone from tag git repo git github com adamchainz nose randomly git dest tmp testclone version depth name update to branch git repo git github com adamchainz nose randomly git dest tmp testclone version master depth expected results on ansible it works fine play task changed task changed play recap ok changed unreachable failed actual results on ansible i get play task changed task an exception occurred during task execution to see the full traceback use vvv the error was ioerror no such file or directory tmp testclone git refs remotes origin head fatal failed changed false failed true module stderr traceback most recent call last n file var folders z t ansible ansible module git py line in n main n file var folders z t ansible ansible module git py line in main n fetch git path module repo dest version remote depth bare refspec n file var folders z t ansible ansible module git py line in fetch n currenthead get head branch git path module dest remote n file var folders z t ansible ansible module git py line in get head branch n f open os path join repo path refs remotes remote head nioerror no such file or directory tmp testclone git refs remotes origin head n module stdout msg module failure parsed false no more hosts left to retry use limit users adamj tmp test retry play recap ok changed unreachable failed it looks like fetch has started assuming that if we re updating to a remote branch we must currently be on a branch as per lines py elif is remote branch git path module dest repo version currenthead get head branch git path module dest remote if currenthead version or that get head branch should work but doesn t when we re on a tag n b this is the contents of the git directory after the clone from tag tree tmp testclone git refs tmp testclone git refs ├── heads └── tags directories files ,1 1052,4863765703.0,IssuesEvent,2016-11-14 16:14:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,template src does not work for roles,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module template ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ``` [defaults] inventory = ./hosts.ini library = ./library forks = 50 gathering = smart roles_path = ./roles vault_password_file = /xxx/ansible_vault_password.txt fact_caching = jsonfile fact_caching_connection = /var/cache/ansible-facts fact_caching_timeout = 86400 var_compression_level = 5 module_compression = 'ZIP_DEFLATED' [privilege_escalation] [paramiko_connection] [ssh_connection] pipelining = True [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT Ubuntu 14.04.5 LTS ##### SUMMARY The module `template` does not search in the `files` directory of a role as it is done by the `copy` module. 
##### STEPS TO REPRODUCE I have a file layout as the best practice documentation recommends: site.yml roles/facts/vars/main.yml roles/facts/tasks/main.yml roles/facts/tasks/facts_file.yml roles/facts/files/bash.j2 My `site.yml` includes the role `facts` and `tasks/main.yml` includes `facts_file.yml`. In `facts_file.yml` I use the template module to transfer files to a remote system with the following `src` attribute in a loop: src: ""{{item.shell}}.j2"" which expands to src: bash.j2 But when I run my `site.yml` I get the error: > IOError: [Errno 2] No such file or directory: u'/home/ziemann/ansible/bash.j2' The file is searched in Ansible's base directory but not in the `files` directory of the role. ##### EXPECTED RESULTS I expect the template module to act on the `src` attribute in the same way as the `copy` module. ##### ACTUAL RESULTS The template module and copy module work in different ways. ##### BTW It seems to me that it is a design error, that there are two different copy modules: one with template expansion and one without. It might be better to merge them together. ##### Workaround? How can I build the path to the file based on the role by myself?",True,"template src does not work for roles - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module template ##### ANSIBLE VERSION ``` ansible 2.1.1.0 ``` ##### CONFIGURATION ``` [defaults] inventory = ./hosts.ini library = ./library forks = 50 gathering = smart roles_path = ./roles vault_password_file = /xxx/ansible_vault_password.txt fact_caching = jsonfile fact_caching_connection = /var/cache/ansible-facts fact_caching_timeout = 86400 var_compression_level = 5 module_compression = 'ZIP_DEFLATED' [privilege_escalation] [paramiko_connection] [ssh_connection] pipelining = True [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT Ubuntu 14.04.5 LTS ##### SUMMARY The module `template` does not search in the `files` directory of a role as it is done by the `copy` module. ##### STEPS TO REPRODUCE I have a file layout as the best practice documentation recommends: site.yml roles/facts/vars/main.yml roles/facts/tasks/main.yml roles/facts/tasks/facts_file.yml roles/facts/files/bash.j2 My `site.yml` includes the role `facts` and `tasks/main.yml` includes `facts_file.yml`. In `facts_file.yml` I use the template module to transfer files to a remote system with the following `src` attribute in a loop: src: ""{{item.shell}}.j2"" which expands to src: bash.j2 But when I run my `site.yml` I get the error: > IOError: [Errno 2] No such file or directory: u'/home/ziemann/ansible/bash.j2' The file is searched in Ansible's base directory but not in the `files` directory of the role. ##### EXPECTED RESULTS I expect the template module to act on the `src` attribute in the same way as the `copy` module. ##### ACTUAL RESULTS The template module and copy module work in different ways. ##### BTW It seems to me that it is a design error, that there are two different copy modules: one with template expansion and one without. It might be better to merge them together. ##### Workaround? 
How can I build the path to the file based on the role by myself?",1,template src does not work for roles issue type bug report component name module template ansible version ansible configuration inventory hosts ini library library forks gathering smart roles path roles vault password file xxx ansible vault password txt fact caching jsonfile fact caching connection var cache ansible facts fact caching timeout var compression level module compression zip deflated pipelining true os environment ubuntu lts summary the module template does not search in the files directory of a role as it is done by the copy module steps to reproduce i have a file layout as the best practice documentation recommends site yml roles facts vars main yml roles facts tasks main yml roles facts tasks facts file yml roles facts files bash my site yml includes the role facts and tasks main yml includes facts file yml in facts file yml i use the template module to transfer files to a remote system with the following src attribute in a loop src item shell which expands to src bash but when i run my site yml i get the error ioerror no such file or directory u home ziemann ansible bash the file is searched in ansible s base directory but not in the files directory of the role expected results i expect the template module to act on the src attribute in the same way as the copy module actual results the template module and copy module work in different ways btw it seems to me that it is a design error that there are two different copy modules one with template expansion and one without it might be better to merge them together workaround how can i build the path to the file based on the role by myself ,1 980,4745802108.0,IssuesEvent,2016-10-21 08:48:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Issue with aws rds module - returned endpoint address has a bunch of asterisks.,affects_2.0 aws bug_report cloud waiting_on_maintainer,"## Issue Type Bug Report ## Component Name rds ## Ansible Version 2.0.0.2 ## Environment Ansible 2.0.0.2 Ubuntu 14.03 AWS ## Summary I am trying to create an AWS RDS service and then write out the endpoint address to a file, but for some reason once the RDS service is available the endpoint address retrieved from the registered variable has a bunch of asterisks in the name? 
### playbook ```` tasks: - name: Launch RDS instances rds: region: ""{{ ec2_region }}"" ec2_access_key: ""{{ ec2_access_key }}"" ec2_secret_key: ""{{ ec2_secret_key }}"" command: create instance_name: oraclelinkstardb db_engine: oracle-se instance_type: db.t2.micro username: linkstar password: xxxxx size: 10 wait: yes wait_timeout: 2000 vpc_security_groups: sg-7ab71302 tags: Environment: dev Application: linkstar with_items: ec2_instances register: rds_result - name: write rds host info to file local_action: module: copy content: ""{{ item.instance.endpoint }}"" dest: ./files/rdsendpoint.txt with_items: ""{{ rds_result.results|default([]) }}"" ```` ### PlayBook Output ```` TASK [write rds host info to file] ********************************************* task path: /home/arlindo/projects/aws/provision.yml:37 ESTABLISH LOCAL CONNECTION FOR USER: arlindo localhost EXEC rc=flag; [ -r ./files/rdsendpoint.txt ] || rc=2; [ -f ./files/rdsendpoint.txt ] || rc=1; [ -d ./files/rdsendpoint.txt ] && rc=3; python -V 2>/dev/null || rc=4; [ x""$rc"" != ""xflag"" ] && echo ""${rc} ""./files/rdsendpoint.txt && exit 0; (python -c 'import hashlib; BLOCKSIZE = 65536; hasher = hashlib.sha1(); afile = open(""'./files/rdsendpoint.txt'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (python -c 'import sha; BLOCKSIZE = 65536; hasher = sha.sha(); afile = open(""'./files/rdsendpoint.txt'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (echo '0 './files/rdsendpoint.txt) localhost EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852 )"" ) localhost PUT /tmp/tmpBne1eN TO /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source localhost EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681 )"" ) localhost PUT /tmp/tmpEMzg0F TO /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/copy localhost EXEC LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/copy; rm -rf ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/"" > /dev/null 2>&1 changed: [localhost -> localhost] => (item={'invocation': {'module_name': u'rds', u'module_args': {u'profile': None, u'db_engine': u'oracle-se', u'iops': None, u'publicly_accessible': None, u'ec2_url': None, u'backup_retention': None, u'port': None, u'security_groups': None, u'size': 10, u'aws_secret_key': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'subnet': None, u'vpc_security_groups': [u'sg-7ab71302'], u'upgrade': False, u'zone': None, u'source_instance': None, u'parameter_group': None, u'command': u'create', u'multi_zone': False, u'new_instance_name': None, u'username': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'tags': {u'Environment': u'dev', u'Application': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}, u'character_set_name': None, u'db_name': None, u'license_model': None, u'ec2_access_key': u'', u'ec2_secret_key': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'apply_immediately': False, u'wait': True, u'aws_access_key': u'', u'security_token': None, u'force_failover': False, 
u'maint_window': None, u'region': u'us-east-1', u'option_group': None, u'engine_version': None, u'instance_name': u'oracle********db', u'instance_type': u'db.t2.micro', u'password': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'wait_timeout': 2000, u'snapshot': None, u'backup_window': None, u'validate_certs': True}}, u'instance': {u'status': u'available', u'username': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'vpc_security_groups': u'sg-7ab71302', u'endpoint': u'oracle********db.cgxmjxhypelr.us-east-1.rds.amazonaws.com', u'availability_zone': u'us-east-1e', u'port': 1521, u'replication_source': None, u'instance_type': u'db.t2.micro', u'iops': None, u'create_time': 1456801030.478, u'backup_retention': 1, u'multi_zone': False, u'id': u'oracle********db', u'maintenance_window': u'mon:09:02-mon:09:32'}, u'changed': True, '_ansible_no_log': False, 'item': {u'group': [u'web', u'winrdp', u'default'], u'count_tag': {u'Name': u'webwin'}, u'exact_count': 1, u'instance_type': u't2.micro', u'keypair': u'ansible', u'instance_tags': {u'Name': u'webwin'}, u'image': u'ami-42596f28'}}) => {""changed"": true, ""checksum"": ""92761a1042a66cf97c67626d7a307b6e4342f8c4"", ""dest"": ""./files/rdsendpoint.txt"", ""gid"": 1000, ""group"": ""arlindo"", ""invocation"": {""module_args"": {""backup"": false, ""content"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""delimiter"": null, ""dest"": ""./files/rdsendpoint.txt"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""group"": null, ""mode"": null, ""original_basename"": ""tmpBne1eN"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source"", ""validate"": null}}, ""item"": {""_ansible_no_log"": false, ""changed"": true, ""instance"": {""availability_zone"": ""us-east-1e"", ""backup_retention"": 1, ""create_time"": 1456801030.478, ""endpoint"": ""oracle********db.cgxmjxhypelr.us-east-1.rds.amazonaws.com"", ""id"": ""oracle********db"", ""instance_type"": ""db.t2.micro"", ""iops"": null, ""maintenance_window"": ""mon:09:02-mon:09:32"", ""multi_zone"": false, ""port"": 1521, ""replication_source"": null, ""status"": ""available"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""vpc_security_groups"": ""sg-7ab71302""}, ""invocation"": {""module_args"": {""apply_immediately"": false, ""aws_access_key"": """", ""aws_secret_key"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""backup_retention"": null, ""backup_window"": null, ""character_set_name"": null, ""command"": ""create"", ""db_engine"": ""oracle-se"", ""db_name"": null, ""ec2_access_key"": """", ""ec2_secret_key"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""ec2_url"": null, ""engine_version"": null, ""force_failover"": false, ""instance_name"": ""oracle********db"", ""instance_type"": ""db.t2.micro"", ""iops"": null, ""license_model"": null, ""maint_window"": null, ""multi_zone"": false, ""new_instance_name"": null, ""option_group"": null, ""parameter_group"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""profile"": null, ""publicly_accessible"": null, ""region"": ""us-east-1"", ""security_groups"": null, ""security_token"": null, ""size"": 10, ""snapshot"": null, ""source_instance"": null, ""subnet"": null, ""tags"": {""Application"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""Environment"": ""dev""}, ""upgrade"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true, 
""vpc_security_groups"": [""sg-7ab71302""], ""wait"": true, ""wait_timeout"": 2000, ""zone"": null}, ""module_name"": ""rds""}, ""item"": {""count_tag"": {""Name"": ""webwin""}, ""exact_count"": 1, ""group"": [""web"", ""winrdp"", ""default""], ""image"": ""ami-42596f28"", ""instance_tags"": {""Name"": ""webwin""}, ""instance_type"": ""t2.micro"", ""keypair"": ""ansible""}}, ""md5sum"": ""aa41800bfab755d8f1787ea90e7f0233"", ""mode"": ""0664"", ""owner"": ""arlindo"", ""size"": 57, ""src"": ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source"", ""state"": ""file"", ""uid"": 1000} ````",True,"Issue with aws rds module - returned endpoint address has a bunch of asterisks. - ## Issue Type Bug Report ## Component Name rds ## Ansible Version 2.0.0.2 ## Environment Ansible 2.0.0.2 Ubuntu 14.03 AWS ## Summary I am trying to create an AWS RDS service and then write out the endpoint address to a file, but for some reason once the RDS service is available the endpoint address retrieved from the registered variable has a bunch of asterisks in the name? ### playbook ```` tasks: - name: Launch RDS instances rds: region: ""{{ ec2_region }}"" ec2_access_key: ""{{ ec2_access_key }}"" ec2_secret_key: ""{{ ec2_secret_key }}"" command: create instance_name: oraclelinkstardb db_engine: oracle-se instance_type: db.t2.micro username: linkstar password: xxxxx size: 10 wait: yes wait_timeout: 2000 vpc_security_groups: sg-7ab71302 tags: Environment: dev Application: linkstar with_items: ec2_instances register: rds_result - name: write rds host info to file local_action: module: copy content: ""{{ item.instance.endpoint }}"" dest: ./files/rdsendpoint.txt with_items: ""{{ rds_result.results|default([]) }}"" ```` ### PlayBook Output ```` TASK [write rds host info to file] ********************************************* task path: /home/arlindo/projects/aws/provision.yml:37 ESTABLISH LOCAL CONNECTION FOR USER: arlindo localhost EXEC rc=flag; [ -r ./files/rdsendpoint.txt ] || rc=2; [ -f ./files/rdsendpoint.txt ] || rc=1; [ -d ./files/rdsendpoint.txt ] && rc=3; python -V 2>/dev/null || rc=4; [ x""$rc"" != ""xflag"" ] && echo ""${rc} ""./files/rdsendpoint.txt && exit 0; (python -c 'import hashlib; BLOCKSIZE = 65536; hasher = hashlib.sha1(); afile = open(""'./files/rdsendpoint.txt'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (python -c 'import sha; BLOCKSIZE = 65536; hasher = sha.sha(); afile = open(""'./files/rdsendpoint.txt'"", ""rb"") buf = afile.read(BLOCKSIZE) while len(buf) > 0: hasher.update(buf) buf = afile.read(BLOCKSIZE) afile.close() print(hasher.hexdigest())' 2>/dev/null) || (echo '0 './files/rdsendpoint.txt) localhost EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852 )"" ) localhost PUT /tmp/tmpBne1eN TO /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source localhost EXEC ( umask 22 && mkdir -p ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681 )"" && echo ""$( echo $HOME/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681 )"" ) localhost PUT /tmp/tmpEMzg0F TO /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/copy localhost EXEC LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/copy; rm 
-rf ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.24-41001705013681/"" > /dev/null 2>&1 changed: [localhost -> localhost] => (item={'invocation': {'module_name': u'rds', u'module_args': {u'profile': None, u'db_engine': u'oracle-se', u'iops': None, u'publicly_accessible': None, u'ec2_url': None, u'backup_retention': None, u'port': None, u'security_groups': None, u'size': 10, u'aws_secret_key': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'subnet': None, u'vpc_security_groups': [u'sg-7ab71302'], u'upgrade': False, u'zone': None, u'source_instance': None, u'parameter_group': None, u'command': u'create', u'multi_zone': False, u'new_instance_name': None, u'username': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'tags': {u'Environment': u'dev', u'Application': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}, u'character_set_name': None, u'db_name': None, u'license_model': None, u'ec2_access_key': u'', u'ec2_secret_key': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'apply_immediately': False, u'wait': True, u'aws_access_key': u'', u'security_token': None, u'force_failover': False, u'maint_window': None, u'region': u'us-east-1', u'option_group': None, u'engine_version': None, u'instance_name': u'oracle********db', u'instance_type': u'db.t2.micro', u'password': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'wait_timeout': 2000, u'snapshot': None, u'backup_window': None, u'validate_certs': True}}, u'instance': {u'status': u'available', u'username': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'vpc_security_groups': u'sg-7ab71302', u'endpoint': u'oracle********db.cgxmjxhypelr.us-east-1.rds.amazonaws.com', u'availability_zone': u'us-east-1e', u'port': 1521, u'replication_source': None, u'instance_type': u'db.t2.micro', u'iops': None, u'create_time': 1456801030.478, u'backup_retention': 1, u'multi_zone': False, u'id': u'oracle********db', u'maintenance_window': u'mon:09:02-mon:09:32'}, u'changed': True, '_ansible_no_log': False, 'item': {u'group': [u'web', u'winrdp', u'default'], u'count_tag': {u'Name': u'webwin'}, u'exact_count': 1, u'instance_type': u't2.micro', u'keypair': u'ansible', u'instance_tags': {u'Name': u'webwin'}, u'image': u'ami-42596f28'}}) => {""changed"": true, ""checksum"": ""92761a1042a66cf97c67626d7a307b6e4342f8c4"", ""dest"": ""./files/rdsendpoint.txt"", ""gid"": 1000, ""group"": ""arlindo"", ""invocation"": {""module_args"": {""backup"": false, ""content"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""delimiter"": null, ""dest"": ""./files/rdsendpoint.txt"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""group"": null, ""mode"": null, ""original_basename"": ""tmpBne1eN"", ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source"", ""validate"": null}}, ""item"": {""_ansible_no_log"": false, ""changed"": true, ""instance"": {""availability_zone"": ""us-east-1e"", ""backup_retention"": 1, ""create_time"": 1456801030.478, ""endpoint"": ""oracle********db.cgxmjxhypelr.us-east-1.rds.amazonaws.com"", ""id"": ""oracle********db"", ""instance_type"": ""db.t2.micro"", ""iops"": null, ""maintenance_window"": ""mon:09:02-mon:09:32"", ""multi_zone"": false, ""port"": 1521, ""replication_source"": null, ""status"": ""available"", ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""vpc_security_groups"": ""sg-7ab71302""}, ""invocation"": {""module_args"": {""apply_immediately"": false, ""aws_access_key"": """", ""aws_secret_key"": 
""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""backup_retention"": null, ""backup_window"": null, ""character_set_name"": null, ""command"": ""create"", ""db_engine"": ""oracle-se"", ""db_name"": null, ""ec2_access_key"": """", ""ec2_secret_key"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""ec2_url"": null, ""engine_version"": null, ""force_failover"": false, ""instance_name"": ""oracle********db"", ""instance_type"": ""db.t2.micro"", ""iops"": null, ""license_model"": null, ""maint_window"": null, ""multi_zone"": false, ""new_instance_name"": null, ""option_group"": null, ""parameter_group"": null, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""port"": null, ""profile"": null, ""publicly_accessible"": null, ""region"": ""us-east-1"", ""security_groups"": null, ""security_token"": null, ""size"": 10, ""snapshot"": null, ""source_instance"": null, ""subnet"": null, ""tags"": {""Application"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""Environment"": ""dev""}, ""upgrade"": false, ""username"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""validate_certs"": true, ""vpc_security_groups"": [""sg-7ab71302""], ""wait"": true, ""wait_timeout"": 2000, ""zone"": null}, ""module_name"": ""rds""}, ""item"": {""count_tag"": {""Name"": ""webwin""}, ""exact_count"": 1, ""group"": [""web"", ""winrdp"", ""default""], ""image"": ""ami-42596f28"", ""instance_tags"": {""Name"": ""webwin""}, ""instance_type"": ""t2.micro"", ""keypair"": ""ansible""}}, ""md5sum"": ""aa41800bfab755d8f1787ea90e7f0233"", ""mode"": ""0664"", ""owner"": ""arlindo"", ""size"": 57, ""src"": ""/home/arlindo/.ansible/tmp/ansible-tmp-1456801607.23-107237530777852/source"", ""state"": ""file"", ""uid"": 1000} ````",1,issue with aws rds module returned endpoint address has a bunch of asterisks issue type bug report component name rds ansible version environment ansible ubuntu aws summary i am trying to create an aws rds service and then write out the endpoint address to a file but for some reason once the rds service is available the endpoint address retrieved from the registered variable has a bunch of asterisks in the name playbook tasks name launch rds instances rds region region access key access key secret key secret key command create instance name oraclelinkstardb db engine oracle se instance type db micro username linkstar password xxxxx size wait yes wait timeout vpc security groups sg tags environment dev application linkstar with items instances register rds result name write rds host info to file local action module copy content item instance endpoint dest files rdsendpoint txt with items rds result results default playbook output task task path home arlindo projects aws provision yml establish local connection for user arlindo localhost exec rc flag rc rc rc python v dev null rc echo rc files rdsendpoint txt exit python c import hashlib blocksize hasher hashlib afile open files rdsendpoint txt rb buf afile read blocksize while len buf hasher update buf buf afile read blocksize afile close print hasher hexdigest dev null python c import sha blocksize hasher sha sha afile open files rdsendpoint txt rb buf afile read blocksize while len buf hasher update buf buf afile read blocksize afile close print hasher hexdigest dev null echo files rdsendpoint txt localhost exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put tmp to home arlindo ansible tmp ansible tmp source localhost exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost 
put tmp to home arlindo ansible tmp ansible tmp copy localhost exec lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home arlindo ansible tmp ansible tmp copy rm rf home arlindo ansible tmp ansible tmp dev null changed item invocation module name u rds u module args u profile none u db engine u oracle se u iops none u publicly accessible none u url none u backup retention none u port none u security groups none u size u aws secret key u value specified in no log parameter u subnet none u vpc security groups u upgrade false u zone none u source instance none u parameter group none u command u create u multi zone false u new instance name none u username u value specified in no log parameter u tags u environment u dev u application u value specified in no log parameter u character set name none u db name none u license model none u access key u u secret key u value specified in no log parameter u apply immediately false u wait true u aws access key u u security token none u force failover false u maint window none u region u us east u option group none u engine version none u instance name u oracle db u instance type u db micro u password u value specified in no log parameter u wait timeout u snapshot none u backup window none u validate certs true u instance u status u available u username u value specified in no log parameter u vpc security groups u sg u endpoint u oracle db cgxmjxhypelr us east rds amazonaws com u availability zone u us east u port u replication source none u instance type u db micro u iops none u create time u backup retention u multi zone false u id u oracle db u maintenance window u mon mon u changed true ansible no log false item u group u count tag u name u webwin u exact count u instance type u micro u keypair u ansible u instance tags u name u webwin u image u ami changed true checksum dest files rdsendpoint txt gid group arlindo invocation module args backup false content value specified in no log parameter delimiter null dest files rdsendpoint txt directory mode null follow false force true group null mode null original basename owner null regexp null remote src null selevel null serole null setype null seuser null src home arlindo ansible tmp ansible tmp source validate null item ansible no log false changed true instance availability zone us east backup retention create time endpoint oracle db cgxmjxhypelr us east rds amazonaws com id oracle db instance type db micro iops null maintenance window mon mon multi zone false port replication source null status available username value specified in no log parameter vpc security groups sg invocation module args apply immediately false aws access key aws secret key value specified in no log parameter backup retention null backup window null character set name null command create db engine oracle se db name null access key secret key value specified in no log parameter url null engine version null force failover false instance name oracle db instance type db micro iops null license model null maint window null multi zone false new instance name null option group null parameter group null password value specified in no log parameter port null profile null publicly accessible null region us east security groups null security token null size snapshot null source instance null subnet null tags application value specified in no log parameter environment dev upgrade false username value specified in no log parameter validate certs true vpc security groups wait true wait timeout zone null module name rds 
item count tag name webwin exact count group image ami instance tags name webwin instance type micro keypair ansible mode owner arlindo size src home arlindo ansible tmp ansible tmp source state file uid ,1 1855,6577401795.0,IssuesEvent,2017-09-12 00:39:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Feature request: Support instance protection settings for EC2 Auto Scaling Groups in ec2_asg,affects_2.0 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg module ##### ANSIBLE VERSION > 2.0 ##### SUMMARY Would like the ec2_asg module to support instance protection settings. https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling ",True,"Feature request: Support instance protection settings for EC2 Auto Scaling Groups in ec2_asg - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg module ##### ANSIBLE VERSION > 2.0 ##### SUMMARY Would like the ec2_asg module to support instance protection settings. https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling ",1,feature request support instance protection settings for auto scaling groups in asg issue type feature idea component name asg module ansible version summary would like the asg module to support instance protection settings ,1 912,4581949373.0,IssuesEvent,2016-09-19 08:24:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,eos_config match:none has become idempotent - improve docs,affects_2.2 bug_report networking P2 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config And possibly other `_config` modules ##### ANSIBLE VERSION ``` ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100) lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100) lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100) ``` ##### SUMMARY This is most likely going to be a documentation issue rather than a code issue. In `eos_template` when `force: true` was used we to always report as changed. From the documentation for eos_template >The force argument instructs the module to not consider the current devices running-config. When set to true, this will cause the module to push the contents of src into the device without first checking if already configured. For `eos_config` (and possibly all `_config` modules) it appears that force does an idempotent check. Which you could strongly argue is the ""right thing"", however in the `eos_config` documentation we state: >The force argument instructs the module to not consider the current devices running-config. When set to true, this will cause the module to push the contents of src into the device without first checking if already configured. Note this argument should be considered deprecated. To achieve the equivalient, set the match argument to none. This argument will be removed in a future release. Note using `match: none` and `force: true` with `eos_config` both do the same thing, i.e. 
are idempotent ##### STEPS TO REPRODUCE ``` - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" ``` ##### EXPECTED RESULTS Documentation and implementation to match ##### ACTUAL RESULTS ",True,"eos_config match:none has become idempotent - improve docs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_config And possibly other `_config` modules ##### ANSIBLE VERSION ``` ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100) lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100) lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100) ``` ##### SUMMARY This is most likely going to be a documentation issue rather than a code issue. In `eos_template` when `force: true` was used we to always report as changed. From the documentation for eos_template >The force argument instructs the module to not consider the current devices running-config. When set to true, this will cause the module to push the contents of src into the device without first checking if already configured. For `eos_config` (and possibly all `_config` modules) it appears that force does an idempotent check. Which you could strongly argue is the ""right thing"", however in the `eos_config` documentation we state: >The force argument instructs the module to not consider the current devices running-config. When set to true, this will cause the module to push the contents of src into the device without first checking if already configured. Note this argument should be considered deprecated. To achieve the equivalient, set the match argument to none. This argument will be removed in a future release. Note using `match: none` and `force: true` with `eos_config` both do the same thing, i.e. 
are idempotent ##### STEPS TO REPRODUCE ``` - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - name: configure device with config eos_config: src: basic/config.j2 provider: ""{{ cli }}"" match: none register: result - assert: that: - ""result.changed == true"" ``` ##### EXPECTED RESULTS Documentation and implementation to match ##### ACTUAL RESULTS ",1,eos config match none has become idempotent improve docs issue type bug report component name eos config and possibly other config modules ansible version ansible eos cmd v item last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt summary this is most likely going to be a documentation issue rather than a code issue in eos template when force true was used we to always report as changed from the documentation for eos template the force argument instructs the module to not consider the current devices running config when set to true this will cause the module to push the contents of src into the device without first checking if already configured for eos config and possibly all config modules it appears that force does an idempotent check which you could strongly argue is the right thing however in the eos config documentation we state the force argument instructs the module to not consider the current devices running config when set to true this will cause the module to push the contents of src into the device without first checking if already configured note this argument should be considered deprecated to achieve the equivalient set the match argument to none this argument will be removed in a future release note using match none and force true with eos config both do the same thing i e are idempotent steps to reproduce name configure device with config eos config src basic config provider cli match none register result name configure device with config eos config src basic config provider cli match none register result assert that result changed true expected results documentation and implementation to match actual results ,1 1916,6577706408.0,IssuesEvent,2017-09-12 02:45:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloud/docker: recreates stopped named containers with state=started,affects_2.0 bug_report cloud docker waiting_on_maintainer,"##### Issue Type: Bug Report ##### Plugin Name: docker ##### Ansible Version: ``` ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: Docker module recreates stopped named container with state=started ##### Steps To Reproduce: Task description ``` - name: Start container docker: image: debian name: lab pull: missing detach: yes net: bridge tty: yes command: sleep infinity state: started ``` Output from ansible-paybook -vv ``` TASK [start-container : Start container] *************************************** changed: [localhost] => {""ansible_facts"": {""docker_containers"": [{""AppArmorProfile"": """", ""Args"": [""infinity""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""sleep"", ""infinity""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""75bb43da9afb"", ""Image"": ""android"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, 
""StdinOnce"": false, ""Tty"": true, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-02-21T15:57:18.464570878Z"", ""Driver"": ""btrfs"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": null, ""Name"": ""btrfs""}, ""HostConfig"": {""Binds"": null, ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""CgroupParent"": """", ""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {}, ""Type"": ""journald""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""bridge"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/hostname"", ""HostsPath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/hosts"", ""Id"": ""75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436"", ""Image"": ""sha256:484a6a69ac0c4f06b7bff344a36745414fe57024c07ab3c90d8146b835256008"", ""LogPath"": """", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/lab"", ""NetworkSettings"": {""Bridge"": """", ""EndpointID"": ""a929802ab259e1992472d2ac0383f82ca82ca19f901f565a6dcc1ccd3072f4ce"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": ""02:42:ac:11:00:02"", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": ""a929802ab259e1992472d2ac0383f82ca82ca19f901f565a6dcc1ccd3072f4ce"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": ""02:42:ac:11:00:02"", ""NetworkID"": ""6a6c469a29d9f7d80a25d6df2e158d1f4c4ead81a463fe4ac13b20a60280ce6d""}}, ""Ports"": {}, ""SandboxID"": ""6ccf812bac48ad68b87808dcc05dad84cf41054a21d03f48e2eb1dcba45414a7"", ""SandboxKey"": ""/var/run/docker/netns/6ccf812bac48"", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""sleep"", ""ProcessLabel"": """", ""ResolvConfPath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/resolv.conf"", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 16620, ""Restarting"": false, ""Running"": true, ""StartedAt"": ""2016-02-21T15:57:18.649681499Z"", ""Status"": ""running""}}]}, ""changed"": true, ""msg"": ""removed 1 container, started 1 container, created 1 
container."", ""reload_reasons"": null, ""summary"": {""created"": 1, ""killed"": 0, ""pulled"": 0, ""removed"": 1, ""restarted"": 0, ""started"": 1, ""stopped"": 0}} ``` ##### Expected Results: Named container restarts with last saved state ##### Actual Results: Named container was recreated from image Looks like the problem caused by this code: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1690 ",True,"cloud/docker: recreates stopped named containers with state=started - ##### Issue Type: Bug Report ##### Plugin Name: docker ##### Ansible Version: ``` ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: Docker module recreates stopped named container with state=started ##### Steps To Reproduce: Task description ``` - name: Start container docker: image: debian name: lab pull: missing detach: yes net: bridge tty: yes command: sleep infinity state: started ``` Output from ansible-paybook -vv ``` TASK [start-container : Start container] *************************************** changed: [localhost] => {""ansible_facts"": {""docker_containers"": [{""AppArmorProfile"": """", ""Args"": [""infinity""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""sleep"", ""infinity""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""], ""Hostname"": ""75bb43da9afb"", ""Image"": ""android"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": true, ""User"": """", ""Volumes"": null, ""WorkingDir"": """"}, ""Created"": ""2016-02-21T15:57:18.464570878Z"", ""Driver"": ""btrfs"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": null, ""Name"": ""btrfs""}, ""HostConfig"": {""Binds"": null, ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""CgroupParent"": """", ""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {}, ""Type"": ""journald""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""bridge"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/hostname"", ""HostsPath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/hosts"", ""Id"": ""75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436"", ""Image"": ""sha256:484a6a69ac0c4f06b7bff344a36745414fe57024c07ab3c90d8146b835256008"", ""LogPath"": """", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/lab"", 
""NetworkSettings"": {""Bridge"": """", ""EndpointID"": ""a929802ab259e1992472d2ac0383f82ca82ca19f901f565a6dcc1ccd3072f4ce"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": ""02:42:ac:11:00:02"", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": ""a929802ab259e1992472d2ac0383f82ca82ca19f901f565a6dcc1ccd3072f4ce"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": ""02:42:ac:11:00:02"", ""NetworkID"": ""6a6c469a29d9f7d80a25d6df2e158d1f4c4ead81a463fe4ac13b20a60280ce6d""}}, ""Ports"": {}, ""SandboxID"": ""6ccf812bac48ad68b87808dcc05dad84cf41054a21d03f48e2eb1dcba45414a7"", ""SandboxKey"": ""/var/run/docker/netns/6ccf812bac48"", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""sleep"", ""ProcessLabel"": """", ""ResolvConfPath"": ""/var/lib/docker/containers/75bb43da9afb967ee60c095f3dd5f03183f80b3973ccb75d62d08678f0de6436/resolv.conf"", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 16620, ""Restarting"": false, ""Running"": true, ""StartedAt"": ""2016-02-21T15:57:18.649681499Z"", ""Status"": ""running""}}]}, ""changed"": true, ""msg"": ""removed 1 container, started 1 container, created 1 container."", ""reload_reasons"": null, ""summary"": {""created"": 1, ""killed"": 0, ""pulled"": 0, ""removed"": 1, ""restarted"": 0, ""started"": 1, ""stopped"": 0}} ``` ##### Expected Results: Named container restarts with last saved state ##### Actual Results: Named container was recreated from image Looks like the problem caused by this code: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1690 ",1,cloud docker recreates stopped named containers with state started issue type bug report plugin name docker ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible configuration n a environment n a summary docker module recreates stopped named container with state started steps to reproduce task description name start container docker image debian name lab pull missing detach yes net bridge tty yes command sleep infinity state started output from ansible paybook vv task changed ansible facts docker containers config attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image android labels onbuild null openstdin false stdinonce false tty true user volumes null workingdir created driver btrfs execids null graphdriver data null name btrfs hostconfig binds null blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice null capadd null capdrop null cgroupparent consolesize containeridfile cpuperiod cpuquota cpushares cpusetcpus cpusetmems devices null dns null dnsoptions null dnssearch null extrahosts null groupadd null ipcmode isolation kernelmemory links null logconfig config type journald memory memoryreservation memoryswap memoryswappiness networkmode bridge oomkilldisable false oomscoreadj pidmode pidslimit portbindings null privileged false 
publishallports false readonlyrootfs false restartpolicy maximumretrycount name securityopt null shmsize utsmode ulimits null volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath mountlabel mounts name lab networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports sandboxid sandboxkey var run docker netns secondaryipaddresses null null path sleep processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running changed true msg removed container started container created container reload reasons null summary created killed pulled removed restarted started stopped expected results named container restarts with last saved state actual results named container was recreated from image looks like the problem caused by this code ,1 1816,6577318165.0,IssuesEvent,2017-09-12 00:04:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ansible ec2_asg module expiring token for AWS with wait_timeout more than 20 mins,affects_2.3 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION N/A ##### SUMMARY I am using Ansible ec2_asg module. I have my credentails file generated with TEMP AWS credentails in `~/.aws/credentials` every few minutes. I have wait timeout more than 15 mins , so in between that time the credentails are expiring and my Ansible playbook fail. That is because anisble reads creds only once during start and then keep using that. If i manually use connect_to_aws at every place in ec2_asg then it works fine. Is there any easy fix for that ",True,"Ansible ec2_asg module expiring token for AWS with wait_timeout more than 20 mins - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION N/A ##### SUMMARY I am using Ansible ec2_asg module. I have my credentails file generated with TEMP AWS credentails in `~/.aws/credentials` every few minutes. I have wait timeout more than 15 mins , so in between that time the credentails are expiring and my Ansible playbook fail. That is because anisble reads creds only once during start and then keep using that. If i manually use connect_to_aws at every place in ec2_asg then it works fine. 
Is there any easy fix for that ",1,ansible asg module expiring token for aws with wait timeout more than mins issue type bug report component name asg ansible version n a summary i am using ansible asg module i have my credentails file generated with temp aws credentails in aws credentials every few minutes i have wait timeout more than mins so in between that time the credentails are expiring and my ansible playbook fail that is because anisble reads creds only once during start and then keep using that if i manually use connect to aws at every place in asg then it works fine is there any easy fix for that ,1 1917,6577706608.0,IssuesEvent,2017-09-12 02:45:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,s3 - manage objects in S3 - Problem in coping ,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: Please pick one and delete the rest: - Bug Report ##### Plugin Name: s3 - manage objects in S3 ##### Ansible Version: ``` 2.0.0.2 ``` ##### Ansible Configuration: Please mention any settings you've changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### Environment: N/A ##### Summary: We are using S3 module in our playbook to copy files from S3. Recently we added versioning to our Bucket and after a while we decided to suspend it. (Note that after enabling versioning you can't disable it anymore but just to suspend it). Unfortunately after disabling this option there is no option to cp a file using S3 module. I also tried to add ""version=null"" as shown in my bucket but still, action is failed. Can you please provide of a workaround for this case. ##### Steps To Reproduce: Task in ansible playbook : ``` s3: bucket=bla.bla.com object=/jobs/systems_envphp/test.php dest={{ APP_DIR }}/env.php mode=get overwrite=different ``` - Change source bucket to versioning. - Move bucket to suspend versioning. - Try to run the task again. ##### Expected Results: File will be copied same as before (note that current file version in s3 null). File is well copied using aws cli tools but not using ansible. ##### Actual Results: Recieving an error and task is failed. ``` fatal: [X.X.X.X]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 2823, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1482, in get_file\r\n query_args=None)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n\r\nAccessDeniedAccess Denied0D9B24E960DEAAC40YODxs2JmkQchhruCaN1zs6etW35sv91lJ9F9T/6R/fpyES6883QAwCyrHYfrbpGn+vmMIUnRKA=\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} fatal: [X.X.X.X]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 2823, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1482, in get_file\r\n query_args=None)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n\r\nAccessDeniedAccess DeniedEF295D957B42B22FajmqH4MRXArOysKrGB+Ya72krnNBxEWuyzi1JUO6ZLvYMD2E+mauFJGFwnKkYWQHMCGEB4mIgfQ=\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"s3 - manage objects in S3 - Problem in coping - ##### Issue Type: Please pick one and delete the rest: - Bug Report ##### Plugin Name: s3 - manage objects in S3 ##### Ansible Version: ``` 2.0.0.2 ``` ##### Ansible Configuration: Please mention any 
settings you've changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### Environment: N/A ##### Summary: We are using S3 module in our playbook to copy files from S3. Recently we added versioning to our Bucket and after a while we decided to suspend it. (Note that after enabling versioning you can't disable it anymore but just to suspend it). Unfortunately after disabling this option there is no option to cp a file using S3 module. I also tried to add ""version=null"" as shown in my bucket but still, action is failed. Can you please provide of a workaround for this case. ##### Steps To Reproduce: Task in ansible playbook : ``` s3: bucket=bla.bla.com object=/jobs/systems_envphp/test.php dest={{ APP_DIR }}/env.php mode=get overwrite=different ``` - Change source bucket to versioning. - Move bucket to suspend versioning. - Try to run the task again. ##### Expected Results: File will be copied same as before (note that current file version in s3 null). File is well copied using aws cli tools but not using ansible. ##### Actual Results: Recieving an error and task is failed. ``` fatal: [X.X.X.X]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 2823, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\"", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1482, in get_file\r\n query_args=None)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n\r\nAccessDeniedAccess Denied0D9B24E960DEAAC40YODxs2JmkQchhruCaN1zs6etW35sv91lJ9F9T/6R/fpyES6883QAwCyrHYfrbpGn+vmMIUnRKA=\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} fatal: [X.X.X.X]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 2823, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\"", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1482, in get_file\r\n query_args=None)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\"", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n\r\nAccessDeniedAccess DeniedEF295D957B42B22FajmqH4MRXArOysKrGB+Ya72krnNBxEWuyzi1JUO6ZLvYMD2E+mauFJGFwnKkYWQHMCGEB4mIgfQ=\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1, manage objects in problem in coping issue type please pick one and delete the rest bug report plugin name manage objects in ansible version ansible configuration please mention any settings you ve changed added removed in ansible cfg or using the ansible environment variables environment n a summary we are using module in our playbook to copy files from recently we added versioning to our bucket and after a while we decided to suspend it note that after enabling versioning you can t disable it anymore but just to suspend it unfortunately after disabling this option there is no option to cp a file using module i also tried to add version null as shown in my bucket but still action is failed can you please provide of a workaround for this case steps to reproduce task in ansible playbook bucket bla bla com object jobs systems envphp test php dest app dir env php mode get overwrite different change source bucket to versioning move bucket to suspend versioning try to run the task again expected results file will be copied same as before note that current file version in null file is well copied using aws cli tools but not using ansible actual results recieving an error and task is failed fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp line in r n main r n file home ubuntu ansible tmp ansible tmp line in main r n download module bucket obj dest retries version version r n file home ubuntu ansible tmp ansible tmp line in download r n key get contents to filename dest r n file usr local lib dist packages boto key py line in get contents to filename r n response headers response headers r n file usr local lib dist packages boto key py line in get contents to file r n response headers response headers r n file usr local lib dist packages boto key py line in get file r n query args none r n file 
usr local lib dist packages boto key py line in get file internal r n override num retries override num retries r n file usr local lib dist packages boto key py line in open r n override num retries override num retries r n file usr local lib dist packages boto key py line in open read r n self resp reason body r nboto exception forbidden r n r n accessdenied access denied vmmiunrka r n msg module failure parsed false fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp line in r n main r n file home ubuntu ansible tmp ansible tmp line in main r n download module bucket obj dest retries version version r n file home ubuntu ansible tmp ansible tmp line in download r n key get contents to filename dest r n file usr local lib dist packages boto key py line in get contents to filename r n response headers response headers r n file usr local lib dist packages boto key py line in get contents to file r n response headers response headers r n file usr local lib dist packages boto key py line in get file r n query args none r n file usr local lib dist packages boto key py line in get file internal r n override num retries override num retries r n file usr local lib dist packages boto key py line in open r n override num retries override num retries r n file usr local lib dist packages boto key py line in open read r n self resp reason body r nboto exception forbidden r n r n accessdenied access denied r n msg module failure parsed false ,1 821,4442718688.0,IssuesEvent,2016-08-19 14:25:15,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mount should not require fstype/src for state=unmounted,bug_report waiting_on_maintainer,"Issue: Bug report Ansible version: devel Ansible configuration: NA Environment: NA Summary: The ``mount`` module requires that ``src`` and ``fstype`` always be specified, even when this information is not needed, as is the case with ``state=unmounted``. Steps to reproduce: - mount: name=/some/mount/path state=unmounted Expected results: Works Actual results: Doesn't",True,"mount should not require fstype/src for state=unmounted - Issue: Bug report Ansible version: devel Ansible configuration: NA Environment: NA Summary: The ``mount`` module requires that ``src`` and ``fstype`` always be specified, even when this information is not needed, as is the case with ``state=unmounted``. Steps to reproduce: - mount: name=/some/mount/path state=unmounted Expected results: Works Actual results: Doesn't",1,mount should not require fstype src for state unmounted issue bug report ansible version devel ansible configuration na environment na summary the mount module requires that src and fstype always be specified even when this information is not needed as is the case with state unmounted steps to reproduce mount name some mount path state unmounted expected results works actual results doesn t,1 1844,6577380037.0,IssuesEvent,2017-09-12 00:30:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami_find is_public option appears to have the opposite effect on searches,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami_find ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from: OS X 10.11.4 Managing: N/A ##### SUMMARY ec2_ami_find's `is_public` appears to have the opposite effect on searches. 
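A minimal sketch for checking the reported behaviour outside the module, assuming boto 2 credentials are available in the environment; the region, owner id and name pattern are the ones from the playbook below, and the hand-rolled filter is only an illustration of what `is_public: no` is expected to do, not ec2_ami_find's actual implementation.

```python
# Hedged illustration (not the module's own code): list the matching AMIs with
# boto 2 and apply the is_public filter by hand to see what "is_public: no"
# should return for this search.
import boto.ec2

conn = boto.ec2.connect_to_region("us-west-2")
images = conn.get_all_images(
    owners=["099720109477"],
    filters={"name": "ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"},
)

# "is_public: no" should keep only private images; Canonical's AMIs are public,
# so this list is expected to be empty, while the module reportedly returns them.
private_only = [img for img in images if not img.is_public]
print(len(images), len(private_only))
```

If the hand-filtered list is empty while the module still returns results for `is_public: no`, that points at the comparison inside the module being inverted rather than at the AWS data itself.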
##### STEPS TO REPRODUCE Run the following playbook. ``` --- - name: Find base ubuntu image hosts: localhost gather_facts: no tasks: - name: Find latest Ubuntu AMI ec2_ami_find: name: ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"" owner: ""099720109477"" region: us-west-2 sort: name sort_order: descending is_public: no register: ami - debug: var=ami ``` ##### EXPECTED RESULTS No AMIs should be found. ##### ACTUAL RESULTS Several public AMIs are found. ``` No config file found; using defaults Loaded callback default of type stdout, v2.0 PLAYBOOK: base-image-playbook.yml ********************************************** 1 plays in base-image-playbook.yml PLAY [Find base ubuntu image] ************************************************** TASK [Find latest Ubuntu AMI] ************************************************** task path: /Users/mike/courseload/ansible/aws/base-amis/base-image-playbook.yml:7 ESTABLISH LOCAL CONNECTION FOR USER: mike localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772 `"" )' localhost PUT /var/folders/_r/5z5t3kln6yn8r3jgrbyvbb6r0000gn/T/tmpADAGwX TO /Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/ec2_ami_find localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/ec2_ami_find; rm -rf ""/Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/"" > /dev/null 2>&1' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""ami_id"": null, ""ami_tags"": null, ""architecture"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""hypervisor"": null, ""is_public"": false, ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"", ""no_result_action"": ""success"", ""owner"": ""099720109477"", ""platform"": null, ""profile"": null, ""region"": ""us-west-2"", ""security_token"": null, ""sort"": ""name"", ""sort_end"": null, ""sort_order"": ""descending"", ""sort_start"": null, ""sort_tag"": null, ""state"": ""available"", ""validate_certs"": true, ""virtualization_type"": null}, ""module_name"": ""ec2_ami_find""}, ""results"": [{""ami_id"": ""ami-0dc73a6d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df2c1224"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-05-10T11:55:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-3f9f6b5f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-50548f0b"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, 
""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-04-06T22:01:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-8ba74eeb"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b9bbc2e0"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-03-15T08:30:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-412dcf21"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-22204477"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-23T13:52:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-6f22c10f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-275e087e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-18T01:36:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-a1c721c1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d0c42c92"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, 
""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-02T12:11:03.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-401ffa20"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d8efa0f"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-01-20T12:14:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9cbea4fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5dd4561f"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-01-15T22:10:59.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-47465826"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e5bd36a7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-12-20T01:45:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-28879849"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-733e3421"", ""volume_type"": ""standard""}, ""/dev/sdb"": 
{""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-12-18T09:40:13.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e54f5f84"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-43091c01"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-11-18T08:16:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-06a4b367"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d18d8789"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-11-06T23:02:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-96e605a5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f907c8a5"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-20T09:03:36.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-281eff1b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-21fecb62"", ""volume_type"": 
""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-15T20:43:20.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-20d73013"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e08d28a6"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-09T08:47:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e0aa4dd3"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d9da826"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-08T08:57:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-16e30725"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f4006fa4"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-06T09:25:57.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-09f8e239"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": 
""snap-9a178cc2"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-01T11:23:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9bbea5ab"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-367f3474"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-29T11:13:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-1de9f12d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-6414e627"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-25T13:37:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-c53c21f5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b75f10e7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-09T05:58:40.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e91605d9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, 
""size"": 8, ""snapshot_id"": ""snap-96f011c7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-02T05:14:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-bbd5c08b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5a8dfa19"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-15T08:11:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-b5736685"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-45cf0117"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-14T07:14:08.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-539d9763"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d39c40b"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-13T08:49:30.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-c5616bf5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": 
true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-11898c4d"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-11T11:43:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-bf868e8f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7adde43a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-06T19:28:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-a56d6495"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-53c3440c"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-01T09:04:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-d9f5fae9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d5874884"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-07-28T07:13:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e5b8b4d5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": 
{""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ff17f1bd"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-07-25T05:06:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-975b5da7"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e66090bc"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-30T09:14:05.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-ade1d99d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-48daf60e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-09T17:18:18.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7b0d354b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-14f47d54"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-09T03:53:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-0f675e3f"", ""architecture"": ""x86_64"", 
""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df61a59d"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-04T08:13:14.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-65e8d755"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e39793be"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-05-29T10:14:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-d986b7e9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-84189ad9"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-05-07T08:13:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-67526757"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-58822d1a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-04-17T12:05:37.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-595b7069"", 
""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f15377ab"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-04-09T08:51:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7f89a64f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9108fdce"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-03-25T23:55:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cb1536fb"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c78bb49e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-03-05T11:02:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-fd3818cd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-fc5fcba9"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-28T07:14:48.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, 
{""ami_id"": ""ami-0ba6873b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-30ef3d72"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-26T06:02:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-990a2fa9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-03180082"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-10T05:22:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-3bebb50b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-2b7385aa"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-01-23T10:56:41.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-978dd9a7"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f7f8df7a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-11-26T01:49:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, 
""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cd5311fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c5762a0c"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-30T00:18:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-efd497df"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-a8044961"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-26T16:04:28.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-2799da17"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-eb3a4822"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-25T14:06:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cf397aff"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7e9cf9b7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-23T14:15:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": 
""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-2d9add1d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ac55ed5a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-29T18:45:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f1ce8bc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-edd3e31a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-16T22:56:58.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7f6e2b4f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4085d5b7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-14T00:49:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9986fea9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f1898a05"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-07-25T08:28:27.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", 
""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f34032c3"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d04a288"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-06-08T00:30:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f13e4dc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9cdce36e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-05-29T19:07:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-6ac2a85a"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-82da0571"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-04-17T06:17:06.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}]} TASK [debug] ******************************************************************* task path: /Users/mike/courseload/ansible/aws/base-amis/base-image-playbook.yml:17 ok: [localhost] => { ""ami"": { ""changed"": false, ""results"": [ { ""ami_id"": ""ami-0dc73a6d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df2c1224"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-05-10T11:55:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": 
""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-3f9f6b5f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-50548f0b"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-04-06T22:01:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-8ba74eeb"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b9bbc2e0"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-03-15T08:30:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-412dcf21"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-22204477"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-02-23T13:52:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-6f22c10f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-275e087e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-02-18T01:36:38.000Z"", ""description"": null, 
""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-a1c721c1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d0c42c92"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-02-02T12:11:03.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-401ffa20"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d8efa0f"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-01-20T12:14:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9cbea4fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5dd4561f"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-01-15T22:10:59.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-47465826"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e5bd36a7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, 
""creationDate"": ""2015-12-20T01:45:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-28879849"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-733e3421"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-12-18T09:40:13.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e54f5f84"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-43091c01"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-11-18T08:16:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-06a4b367"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d18d8789"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-11-06T23:02:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-96e605a5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f907c8a5"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": 
null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-20T09:03:36.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-281eff1b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-21fecb62"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-15T20:43:20.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-20d73013"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e08d28a6"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-09T08:47:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e0aa4dd3"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d9da826"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-08T08:57:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-16e30725"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f4006fa4"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { 
""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-06T09:25:57.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-09f8e239"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9a178cc2"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-01T11:23:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9bbea5ab"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-367f3474"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-29T11:13:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-1de9f12d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-6414e627"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-25T13:37:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-c53c21f5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": 
""snap-b75f10e7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-09T05:58:40.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e91605d9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-96f011c7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-02T05:14:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-bbd5c08b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5a8dfa19"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-15T08:11:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-b5736685"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-45cf0117"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-14T07:14:08.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-539d9763"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": 
true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d39c40b"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-13T08:49:30.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-c5616bf5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-11898c4d"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-11T11:43:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-bf868e8f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7adde43a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-06T19:28:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-a56d6495"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-53c3440c"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-01T09:04:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-d9f5fae9"", ""architecture"": ""x86_64"", 
""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d5874884"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-07-28T07:13:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e5b8b4d5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ff17f1bd"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-07-25T05:06:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-975b5da7"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e66090bc"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-30T09:14:05.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-ade1d99d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-48daf60e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-09T17:18:18.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { 
""ami_id"": ""ami-7b0d354b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-14f47d54"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-09T03:53:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-0f675e3f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df61a59d"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-04T08:13:14.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-65e8d755"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e39793be"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-05-29T10:14:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-d986b7e9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-84189ad9"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-05-07T08:13:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", 
""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-67526757"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-58822d1a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-04-17T12:05:37.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-595b7069"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f15377ab"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-04-09T08:51:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-7f89a64f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9108fdce"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-03-25T23:55:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cb1536fb"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c78bb49e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-03-05T11:02:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", 
""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-fd3818cd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-fc5fcba9"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-28T07:14:48.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-0ba6873b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-30ef3d72"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-26T06:02:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-990a2fa9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-03180082"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-10T05:22:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-3bebb50b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-2b7385aa"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-01-23T10:56:41.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""owner_id"": 
""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-978dd9a7"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f7f8df7a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-11-26T01:49:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cd5311fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c5762a0c"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-30T00:18:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-efd497df"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-a8044961"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-26T16:04:28.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-2799da17"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-eb3a4822"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-25T14:06:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""name"": 
""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cf397aff"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7e9cf9b7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-23T14:15:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-2d9add1d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ac55ed5a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-29T18:45:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f1ce8bc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-edd3e31a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-16T22:56:58.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-7f6e2b4f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4085d5b7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-14T00:49:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": 
""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9986fea9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f1898a05"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-07-25T08:28:27.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f34032c3"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d04a288"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-06-08T00:30:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f13e4dc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9cdce36e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-05-29T19:07:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-6ac2a85a"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-82da0571"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-04-17T06:17:06.000Z"", ""description"": null, ""hypervisor"": ""xen"", 
""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" } ] } } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` ",True,"ec2_ami_find is_public option appears to have the opposite effect on searches - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami_find ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running from: OS X 10.11.4 Managing: N/A ##### SUMMARY ec2_ami_find's `is_public` appears to have the opposite effect on searches. ##### STEPS TO REPRODUCE Run the following playbook. ``` --- - name: Find base ubuntu image hosts: localhost gather_facts: no tasks: - name: Find latest Ubuntu AMI ec2_ami_find: name: ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"" owner: ""099720109477"" region: us-west-2 sort: name sort_order: descending is_public: no register: ami - debug: var=ami ``` ##### EXPECTED RESULTS No AMIs should be found. ##### ACTUAL RESULTS Several public AMIs are found. ``` No config file found; using defaults Loaded callback default of type stdout, v2.0 PLAYBOOK: base-image-playbook.yml ********************************************** 1 plays in base-image-playbook.yml PLAY [Find base ubuntu image] ************************************************** TASK [Find latest Ubuntu AMI] ************************************************** task path: /Users/mike/courseload/ansible/aws/base-amis/base-image-playbook.yml:7 ESTABLISH LOCAL CONNECTION FOR USER: mike localhost EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772 `"" )' localhost PUT /var/folders/_r/5z5t3kln6yn8r3jgrbyvbb6r0000gn/T/tmpADAGwX TO /Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/ec2_ami_find localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/ec2_ami_find; rm -rf ""/Users/mike/.ansible/tmp/ansible-tmp-1463072481.44-30922191967772/"" > /dev/null 2>&1' ok: [localhost] => {""changed"": false, ""invocation"": {""module_args"": {""ami_id"": null, ""ami_tags"": null, ""architecture"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""ec2_url"": null, ""hypervisor"": null, ""is_public"": false, ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"", ""no_result_action"": ""success"", ""owner"": ""099720109477"", ""platform"": null, ""profile"": null, ""region"": ""us-west-2"", ""security_token"": null, ""sort"": ""name"", ""sort_end"": null, ""sort_order"": ""descending"", ""sort_start"": null, ""sort_tag"": null, ""state"": ""available"", ""validate_certs"": true, ""virtualization_type"": null}, ""module_name"": ""ec2_ami_find""}, ""results"": [{""ami_id"": ""ami-0dc73a6d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df2c1224"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, 
""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-05-10T11:55:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-3f9f6b5f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-50548f0b"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-04-06T22:01:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-8ba74eeb"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b9bbc2e0"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-03-15T08:30:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-412dcf21"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-22204477"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-23T13:52:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-6f22c10f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-275e087e"", ""volume_type"": ""standard""}, ""/dev/sdb"": 
{""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-18T01:36:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-a1c721c1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d0c42c92"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-02-02T12:11:03.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-401ffa20"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d8efa0f"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-01-20T12:14:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9cbea4fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5dd4561f"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2016-01-15T22:10:59.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-47465826"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e5bd36a7"", ""volume_type"": 
""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-12-20T01:45:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-28879849"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-733e3421"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-12-18T09:40:13.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e54f5f84"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-43091c01"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-11-18T08:16:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-06a4b367"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d18d8789"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-11-06T23:02:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-96e605a5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": 
""snap-f907c8a5"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-20T09:03:36.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-281eff1b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-21fecb62"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-15T20:43:20.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-20d73013"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e08d28a6"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-09T08:47:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e0aa4dd3"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d9da826"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-08T08:57:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-16e30725"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, 
""size"": 8, ""snapshot_id"": ""snap-f4006fa4"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-06T09:25:57.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-09f8e239"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9a178cc2"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-10-01T11:23:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9bbea5ab"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-367f3474"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-29T11:13:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-1de9f12d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-6414e627"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-25T13:37:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-c53c21f5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, 
""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b75f10e7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-09T05:58:40.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e91605d9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-96f011c7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-09-02T05:14:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-bbd5c08b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5a8dfa19"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-15T08:11:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-b5736685"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-45cf0117"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-14T07:14:08.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-539d9763"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": 
{""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d39c40b"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-13T08:49:30.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-c5616bf5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-11898c4d"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-11T11:43:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-bf868e8f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7adde43a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-06T19:28:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-a56d6495"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-53c3440c"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-08-01T09:04:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-d9f5fae9"", ""architecture"": ""x86_64"", 
""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d5874884"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-07-28T07:13:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-e5b8b4d5"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ff17f1bd"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-07-25T05:06:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-975b5da7"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e66090bc"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-30T09:14:05.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-ade1d99d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-48daf60e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-09T17:18:18.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7b0d354b"", 
""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-14f47d54"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-09T03:53:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-0f675e3f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df61a59d"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-06-04T08:13:14.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-65e8d755"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e39793be"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-05-29T10:14:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-d986b7e9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-84189ad9"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-05-07T08:13:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, 
{""ami_id"": ""ami-67526757"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-58822d1a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-04-17T12:05:37.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-595b7069"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f15377ab"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-04-09T08:51:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7f89a64f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9108fdce"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-03-25T23:55:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cb1536fb"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c78bb49e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-03-05T11:02:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, 
""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-fd3818cd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-fc5fcba9"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-28T07:14:48.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-0ba6873b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-30ef3d72"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-26T06:02:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-990a2fa9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-03180082"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-02-10T05:22:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-3bebb50b"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-2b7385aa"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2015-01-23T10:56:41.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", 
""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-978dd9a7"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f7f8df7a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-11-26T01:49:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cd5311fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c5762a0c"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-30T00:18:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-efd497df"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-a8044961"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-26T16:04:28.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-2799da17"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-eb3a4822"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-25T14:06:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", 
""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-cf397aff"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7e9cf9b7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-09-23T14:15:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-2d9add1d"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ac55ed5a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-29T18:45:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f1ce8bc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-edd3e31a"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-16T22:56:58.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-7f6e2b4f"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4085d5b7"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-08-14T00:49:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""owner_id"": ""099720109477"", ""platform"": null, 
""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-9986fea9"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f1898a05"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-07-25T08:28:27.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f34032c3"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d04a288"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-06-08T00:30:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-f13e4dc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9cdce36e"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-05-29T19:07:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}, {""ami_id"": ""ami-6ac2a85a"", ""architecture"": ""x86_64"", ""block_device_mapping"": {""/dev/sda1"": {""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-82da0571"", ""volume_type"": ""standard""}, ""/dev/sdb"": {""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null}}, ""creationDate"": ""2014-04-17T06:17:06.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""owner_id"": 
""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual""}]} TASK [debug] ******************************************************************* task path: /Users/mike/courseload/ansible/aws/base-amis/base-image-playbook.yml:17 ok: [localhost] => { ""ami"": { ""changed"": false, ""results"": [ { ""ami_id"": ""ami-0dc73a6d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df2c1224"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-05-10T11:55:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160509.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-3f9f6b5f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-50548f0b"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-04-06T22:01:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160406"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-8ba74eeb"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b9bbc2e0"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-03-15T08:30:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160314"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-412dcf21"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-22204477"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": 
""2016-02-23T13:52:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160222"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-6f22c10f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-275e087e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-02-18T01:36:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160217.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-a1c721c1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d0c42c92"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-02-02T12:11:03.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160201"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-401ffa20"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d8efa0f"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-01-20T12:14:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160119"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9cbea4fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5dd4561f"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, 
""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2016-01-15T22:10:59.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20160114.5"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-47465826"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e5bd36a7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-12-20T01:45:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151218"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-28879849"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-733e3421"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-12-18T09:40:13.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151217"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e54f5f84"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-43091c01"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-11-18T08:16:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151117"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-06a4b367"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d18d8789"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { 
""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-11-06T23:02:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151105"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-96e605a5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f907c8a5"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-20T09:03:36.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151019"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-281eff1b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-21fecb62"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-15T20:43:20.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151015"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-20d73013"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e08d28a6"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-09T08:47:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151008"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e0aa4dd3"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": 
""snap-7d9da826"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-08T08:57:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151007"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-16e30725"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f4006fa4"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-06T09:25:57.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20151005"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-09f8e239"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9a178cc2"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-10-01T11:23:38.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150930"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9bbea5ab"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-367f3474"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-29T11:13:09.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150928"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-1de9f12d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": 
true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-6414e627"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-25T13:37:12.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-c53c21f5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-b75f10e7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-09T05:58:40.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150908"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e91605d9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-96f011c7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-09-02T05:14:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150901.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-bbd5c08b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-5a8dfa19"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-15T08:11:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150814"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-b5736685"", ""architecture"": ""x86_64"", 
""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-45cf0117"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-14T07:14:08.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-539d9763"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4d39c40b"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-13T08:49:30.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150812"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-c5616bf5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-11898c4d"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-11T11:43:15.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150810"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-bf868e8f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7adde43a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-06T19:28:39.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150805"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { 
""ami_id"": ""ami-a56d6495"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-53c3440c"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-08-01T09:04:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150731"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-d9f5fae9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-d5874884"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-07-28T07:13:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150727"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-e5b8b4d5"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ff17f1bd"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-07-25T05:06:19.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-975b5da7"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e66090bc"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-30T09:14:05.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150629"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", 
""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-ade1d99d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-48daf60e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-09T17:18:18.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150609"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-7b0d354b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-14f47d54"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-09T03:53:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150608"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-0f675e3f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-df61a59d"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-06-04T08:13:14.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150603"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-65e8d755"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-e39793be"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-05-29T10:14:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", 
""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-d986b7e9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-84189ad9"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-05-07T08:13:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150506"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-67526757"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-58822d1a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-04-17T12:05:37.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150417"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-595b7069"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f15377ab"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-04-09T08:51:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150408.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-7f89a64f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9108fdce"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-03-25T23:55:53.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150325"", ""owner_id"": ""099720109477"", 
""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cb1536fb"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c78bb49e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-03-05T11:02:22.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150305"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-fd3818cd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-fc5fcba9"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-28T07:14:48.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150227.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-0ba6873b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-30ef3d72"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-26T06:02:31.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150225.2"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-990a2fa9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-03180082"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-02-10T05:22:23.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""name"": 
""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150209.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-3bebb50b"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-2b7385aa"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2015-01-23T10:56:41.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20150123"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-978dd9a7"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f7f8df7a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-11-26T01:49:55.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20141125"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cd5311fd"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-c5762a0c"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-30T00:18:42.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140927"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-efd497df"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-a8044961"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-26T16:04:28.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": 
""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140926"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-2799da17"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-eb3a4822"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-25T14:06:50.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140924"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-cf397aff"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7e9cf9b7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-09-23T14:15:01.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140923"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-2d9add1d"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-ac55ed5a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-29T18:45:35.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140829"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f1ce8bc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-edd3e31a"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-16T22:56:58.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": 
true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140816"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-7f6e2b4f"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-4085d5b7"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-08-14T00:49:54.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140813"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-9986fea9"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-f1898a05"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-07-25T08:28:27.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140724"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f34032c3"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-7d04a288"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-06-08T00:30:16.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140607.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-f13e4dc1"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-9cdce36e"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-05-29T19:07:23.000Z"", 
""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140528"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" }, { ""ami_id"": ""ami-6ac2a85a"", ""architecture"": ""x86_64"", ""block_device_mapping"": { ""/dev/sda1"": { ""delete_on_termination"": true, ""encrypted"": false, ""size"": 8, ""snapshot_id"": ""snap-82da0571"", ""volume_type"": ""standard"" }, ""/dev/sdb"": { ""delete_on_termination"": false, ""encrypted"": null, ""size"": null, ""snapshot_id"": null, ""volume_type"": null } }, ""creationDate"": ""2014-04-17T06:17:06.000Z"", ""description"": null, ""hypervisor"": ""xen"", ""is_public"": true, ""kernel_id"": ""aki-fc8f11cc"", ""location"": ""099720109477/ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""name"": ""ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-20140416.1"", ""owner_id"": ""099720109477"", ""platform"": null, ""root_device_name"": ""/dev/sda1"", ""root_device_type"": ""ebs"", ""state"": ""available"", ""tags"": {}, ""virtualization_type"": ""paravirtual"" } ] } } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` ",1, ami find is public option appears to have the opposite effect on searches issue type bug report component name ami find ansible version ansible configuration os environment running from os x managing n a summary ami find s is public appears to have the opposite effect on searches steps to reproduce run the following playbook name find base ubuntu image hosts localhost gather facts no tasks name find latest ubuntu ami ami find name ubuntu images ebs ubuntu trusty server owner region us west sort name sort order descending is public no register ami debug var ami expected results no amis should be found actual results several public amis are found no config file found using defaults loaded callback default of type stdout playbook base image playbook yml plays in base image playbook yml play task task path users mike courseload ansible aws base amis base image playbook yml establish local connection for user mike localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put var folders r t tmpadagwx to users mike ansible tmp ansible tmp ami find localhost exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users mike ansible tmp ansible tmp ami find rm rf users mike ansible tmp ansible tmp dev null ok changed false invocation module args ami id null ami tags null architecture null aws access key null aws secret key null url null hypervisor null is public false name ubuntu images ebs ubuntu trusty server no result action success owner platform null profile null region us west security token null sort name sort end null sort order descending sort start null sort tag null state available validate certs true virtualization type null module name ami find results task task path users mike courseload ansible aws base amis base image playbook yml ok ami changed false results ami id ami architecture block device mapping dev delete on termination true encrypted false size snapshot id snap volume type standard dev sdb 
delete on termination false encrypted null size null snapshot id null volume type null creationdate description null hypervisor xen is public true kernel id aki location ubuntu images ebs ubuntu trusty server name ubuntu images ebs ubuntu trusty server owner id platform null root device name dev root device type ebs state available tags virtualization type paravirtual ami id ami architecture block device mapping dev delete on termination true encrypted false size snapshot id snap volume type standard dev sdb delete on termination false encrypted null size null snapshot id null volume type null creationdate description null hypervisor xen is public true kernel id aki location ubuntu images ebs ubuntu trusty server name ubuntu images ebs ubuntu trusty server owner id platform null root device name dev root device type ebs state available tags virtualization type paravirtual play recap localhost ok changed unreachable failed ,1 1878,6577505211.0,IssuesEvent,2017-09-12 01:22:59,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Support for Shell Commands in Check Mode,affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME command.py ##### ANSIBLE VERSION ``` ansible 2.1.0 (devel 5fdac707fd) last updated 2016/03/29 10:45:16 (GMT +000) lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/29 10:45:34 (GMT +000) lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/29 10:45:53 (GMT +000) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ansible has a feature known as check mode. I have a requirement whereby when this mode is run all shell commands would be logged as they would be run during a real deployment. Currently the shell module does not support check mode and just skips the task.
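As an illustration of the requirement described in the summary above (and not the reporter's actual gist, which is linked further down), here is a minimal Python sketch of how a command-style module could record the command it would have run while check mode is active; the log path and function name are assumptions for the example:
```
# Illustrative sketch only -- not the gist referenced in this report.
# Assumes the caller passes the module's raw command string and its check_mode flag.
import datetime
import socket

def record_or_run(raw_command, check_mode, log_path="/tmp/check-mode-commands"):
    """In check mode, append the command that *would* run to a log file on the
    remote host; otherwise tell the caller to execute it normally."""
    if check_mode:
        timestamp = datetime.datetime.now().strftime("%H:%M:%S.%f")
        with open(log_path, "a") as log:
            log.write("%s %s %s\n" % (timestamp, socket.gethostname(), raw_command))
        return False  # skip execution, the command has only been logged
    return True       # proceed with normal execution
```
A fetch-style task could then collect the log file from each host at the end of the play, which matches the three-column format (time, host, command) shown under EXPECTED RESULTS.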
I got around this by forking the command.py to allow check mode to run and log the command on the remote host to a temporary file [command.py](https://gist.github.com/philltomlinson/46796d5759c6f78180e62857df6ed3e7) (with comments ""change here""). As the playbook ends I then collect this file from each host using the fetch.py, by removing the if condition in check mode so that it runs and collects the remote command file [fetch.py](https://gist.github.com/philltomlinson/0f729aa9a37c492852d8dc857f990ad8) (this file is in the main ansible repo). However this was a quick fix in order to solve the problem I had. I wondered if: 1. This would be a feature that would be useful to others? 2. Is there a better level this change could be made (at the module execution level)? I put the change in the command.py module directly so I would know the host that the command was run on and any ansible variables passed to the shell command have already been correctly substituted. ##### STEPS TO REPRODUCE Run Ansible in check mode with a shell task. Check remote host for command file under /tmp, however the current gist will need a environment variable set to (for example): export $command_file_name=""check-mode-commands"" ##### EXPECTED RESULTS This will produce files in the expected format, with three columns, datetime, hostname and command that would have run. These files will be on the remote hosts filesystem which we then collect. ``` 09:37:31.305791 host echo ""hello"" 09:37:38.549812 host echo ""hello again"" ``` ##### ACTUAL RESULTS N/A ",1,support for shell commands in check mode issue type feature idea component name command py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration n a os environment n a summary ansible has a feature known as check mode i have a requirement whereby when this mode is run all shell commands would be logged as they would be run during a real deployment currently the shell module does not support check mode and just skips the task i got around this by forking the command py to allow check mode to run and log the command on the remote host to a temporary file with comments change here as the playbook ends i then collect this file from each host using the fetch py by removing the if condition in check mode so that it runs and collects the remote command file this file is in the main ansible repo however this was a quick fix in order to solve the problem i had i wondered if this would be a feature that would be useful to others is there a better level this change could be made at the module execution level i put the change in the command py module directly so i would know the host that the command was run on and any ansible variables passed to the shell command have already been correctly substituted steps to reproduce run ansible in check mode with a shell task check remote host for command file under tmp however the current gist will need a environment variable set to for example export command file name check mode commands expected results this will produce files in the expected format with three columns datetime hostname and command that would have run these files will be on the remote hosts filesystem which we then collect host echo hello host echo hello again actual results n a ,1 888,4553054288.0,IssuesEvent,2016-09-13 02:19:12,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,gc_storage has misnamed 
arguments in documentation (or code),affects_2.0 bug_report cloud gce P3 waiting_on_maintainer,"**Issue Type:** Bug Report **Ansible Version:** master **Summary:** https://github.com/ansible/ansible-modules-core/blob/f4625a3dd104e245a80ff547deb75f0de880d24f/cloud/google/gc_storage.py The documentation lists ""gcs_access_key"" (line 73), but the code which loads this uses ""gs_access_key"" (line 346, etc). Same for ""gcs_secret_key"".",True,"gc_storage has misnamed arguments in documentation (or code) - **Issue Type:** Bug Report **Ansible Version:** master **Summary:** https://github.com/ansible/ansible-modules-core/blob/f4625a3dd104e245a80ff547deb75f0de880d24f/cloud/google/gc_storage.py The documentation lists ""gcs_access_key"" (line 73), but the code which loads this uses ""gs_access_key"" (line 346, etc). Same for ""gcs_secret_key"".",1,gc storage has misnamed arguments in documentation or code issue type bug report ansible version master summary the documentation lists gcs access key line but the code which loads this uses gs access key line etc same for gcs secret key ,1 1698,6574375979.0,IssuesEvent,2017-09-11 12:39:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container succeeds with empty result,affects_2.2 bug_report cloud docker P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION 2.2.0, 2.1.3 ##### SUMMARY The single `ansible_docker_container` key is being scrubbed from the ansible_facts result value, due to the fix for [CVE-2016-3096](http://www.cvedetails.com/cve/CVE-2016-3096/) from issue https://github.com/ansible/ansible/pull/15925. This is due to the return value's fact key having a prefix used for docker connection vars (ansible_docker_). We probably don't want to add exceptions to the CVE fix, so it seems that the only reasonable fix is a breaking change to docker_container's return value (renaming the fact to `docker_container` or something else, or not returning as a fact. ##### STEPS TO REPRODUCE Use the docker_container module in any way on affected versions and look at the output- it returns an empty ansible_facts dict instead of the actual module results. ##### EXPECTED RESULTS Module return value as documented (including the ansible_docker_container subdict and data). ##### ACTUAL RESULTS Empty ansible_facts dictionary.",True,"docker_container succeeds with empty result - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION 2.2.0, 2.1.3 ##### SUMMARY The single `ansible_docker_container` key is being scrubbed from the ansible_facts result value, due to the fix for [CVE-2016-3096](http://www.cvedetails.com/cve/CVE-2016-3096/) from issue https://github.com/ansible/ansible/pull/15925. This is due to the return value's fact key having a prefix used for docker connection vars (ansible_docker_). We probably don't want to add exceptions to the CVE fix, so it seems that the only reasonable fix is a breaking change to docker_container's return value (renaming the fact to `docker_container` or something else, or not returning as a fact. ##### STEPS TO REPRODUCE Use the docker_container module in any way on affected versions and look at the output- it returns an empty ansible_facts dict instead of the actual module results. ##### EXPECTED RESULTS Module return value as documented (including the ansible_docker_container subdict and data). 
##### ACTUAL RESULTS Empty ansible_facts dictionary.",1,docker container succeeds with empty result issue type bug report component name docker container ansible version summary the single ansible docker container key is being scrubbed from the ansible facts result value due to the fix for from issue this is due to the return value s fact key having a prefix used for docker connection vars ansible docker we probably don t want to add exceptions to the cve fix so it seems that the only reasonable fix is a breaking change to docker container s return value renaming the fact to docker container or something else or not returning as a fact steps to reproduce use the docker container module in any way on affected versions and look at the output it returns an empty ansible facts dict instead of the actual module results expected results module return value as documented including the ansible docker container subdict and data actual results empty ansible facts dictionary ,1 1131,4998415631.0,IssuesEvent,2016-12-09 19:47:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ini_file documentation falsely states the default for ""create""option is yes",affects_2.2 docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ini_file core module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT N/A ##### SUMMARY http://docs.ansible.com/ansible/ini_file_module.html states the default for ""create"" option is yes. This is false, the default is no, as in the file: /usr/lib/python2.7/dist-packages/ansible/modules/core/files/ini_file.py ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS The documentation should tell the real default OR the default should be yes. (It is a more reasonable default in my opinion) ##### ACTUAL RESULTS ",True,"ini_file documentation falsely states the default for ""create""option is yes - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ini_file core module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT N/A ##### SUMMARY http://docs.ansible.com/ansible/ini_file_module.html states the default for ""create"" option is yes. This is false, the default is no, as in the file: /usr/lib/python2.7/dist-packages/ansible/modules/core/files/ini_file.py ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS The documentation should tell the real default OR the default should be yes. 
(It is a more reasonable default in my opinion) ##### ACTUAL RESULTS ",1,ini file documentation falsely states the default for create option is yes issue type documentation report component name ini file core module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nothing special os environment n a summary states the default for create option is yes this is false the default is no as in the file usr lib dist packages ansible modules core files ini file py steps to reproduce expected results the documentation should tell the real default or the default should be yes it is a more reasonable default in my opinion actual results ,1 1668,6574071002.0,IssuesEvent,2017-09-11 11:21:11,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"unarchive: unhelpful error message when ""owner"" does not exist and unarchiving tar.gz",affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When ""owner"" parameter is set to user that does not exist, and we are extracting tar.gz archive the module fails with unrelated error message: ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory:"" Interesting note: when the file being archived has no extension, ie ""test"" instead of ""test.txt"" below, the issue does not occur. ##### STEPS TO REPRODUCE ``` mkdir /tmp/unarchive_bug cd /tmp/unarchive_bug mkdir unarchived echo ""test"" > test.txt tar cf testfile.tar test.txt gzip -9 testfile.tar cat <<'EOF' > unarchive_test.yml - hosts: localhost tasks: - name: unarchive test unarchive: copy: no src: /tmp/unarchive_bug/testfile.tar.gz dest: /tmp/unarchive_bug/unarchived owner: doesnotexist EOF ansible-playbook -c local unarchive_test.yml ``` ##### EXPECTED RESULTS Failure message that indicates that user ""doesnotexist"" does not exist. ##### ACTUAL RESULTS ``` TASK [unarchive test] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""dest"": ""/tmp/unarchive_bug/unarchived"", ""failed"": true, ""gid"": 0, ""group"": ""root"", ""handler"": ""TgzArchive"", ""mode"": ""0755"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/unarchive_bug/unarchived/test.txt'"", ""owner"": ""root"", ""size"": 21, ""src"": ""/tmp/unarchive_bug/testfile.tar.gz"", ""state"": ""directory"", ""uid"": 0} to retry, use: --limit @/tmp/unarchive_bug/unarchive_test.retry ``` ",True,"unarchive: unhelpful error message when ""owner"" does not exist and unarchiving tar.gz - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME unarchive ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY When ""owner"" parameter is set to user that does not exist, and we are extracting tar.gz archive the module fails with unrelated error message: ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory:"" Interesting note: when the file being archived has no extension, ie ""test"" instead of ""test.txt"" below, the issue does not occur. 
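The report above suggests the underlying problem is a missing user surfacing as a misleading file-access error. As a hedged illustration (this is not the unarchive module's actual code), a small Python pre-flight check could name the real cause before anything is extracted; the function name and message wording are assumptions:
```
# Hypothetical pre-flight validation, shown only to illustrate a clearer failure mode.
import grp
import pwd

def validate_ownership(owner=None, group=None):
    """Return an error message if the requested owner/group do not exist on the
    target host, so the caller can fail early with a meaningful message."""
    if owner is not None:
        try:
            pwd.getpwnam(owner)
        except KeyError:
            return "requested owner '%s' does not exist on the target host" % owner
    if group is not None:
        try:
            grp.getgrnam(group)
        except KeyError:
            return "requested group '%s' does not exist on the target host" % group
    return None
```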
##### STEPS TO REPRODUCE ``` mkdir /tmp/unarchive_bug cd /tmp/unarchive_bug mkdir unarchived echo ""test"" > test.txt tar cf testfile.tar test.txt gzip -9 testfile.tar cat <<'EOF' > unarchive_test.yml - hosts: localhost tasks: - name: unarchive test unarchive: copy: no src: /tmp/unarchive_bug/testfile.tar.gz dest: /tmp/unarchive_bug/unarchived owner: doesnotexist EOF ansible-playbook -c local unarchive_test.yml ``` ##### EXPECTED RESULTS Failure message that indicates that user ""doesnotexist"" does not exist. ##### ACTUAL RESULTS ``` TASK [unarchive test] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {""changed"": false, ""dest"": ""/tmp/unarchive_bug/unarchived"", ""failed"": true, ""gid"": 0, ""group"": ""root"", ""handler"": ""TgzArchive"", ""mode"": ""0755"", ""msg"": ""Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/unarchive_bug/unarchived/test.txt'"", ""owner"": ""root"", ""size"": 21, ""src"": ""/tmp/unarchive_bug/testfile.tar.gz"", ""state"": ""directory"", ""uid"": 0} to retry, use: --limit @/tmp/unarchive_bug/unarchive_test.retry ``` ",1,unarchive unhelpful error message when owner does not exist and unarchiving tar gz issue type bug report component name unarchive ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when owner parameter is set to user that does not exist and we are extracting tar gz archive the module fails with unrelated error message unexpected error when accessing exploded file no such file or directory interesting note when the file being archived has no extension ie test instead of test txt below the issue does not occur steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used mkdir tmp unarchive bug cd tmp unarchive bug mkdir unarchived echo test test txt tar cf testfile tar test txt gzip testfile tar cat unarchive test yml hosts localhost tasks name unarchive test unarchive copy no src tmp unarchive bug testfile tar gz dest tmp unarchive bug unarchived owner doesnotexist eof ansible playbook c local unarchive test yml expected results failure message that indicates that user doesnotexist does not exist actual results task fatal failed changed false dest tmp unarchive bug unarchived failed true gid group root handler tgzarchive mode msg unexpected error when accessing exploded file no such file or directory tmp unarchive bug unarchived test txt owner root size src tmp unarchive bug testfile tar gz state directory uid to retry use limit tmp unarchive bug unarchive test retry ,1 1834,6577362926.0,IssuesEvent,2017-09-12 00:23:14,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module tracebacks on cred failure,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = /usr/lib/python2.7/site-packages/awx/plugins/library ``` ##### CONFIGURATION Nothing of note. ##### OS / ENVIRONMENT CentOS 7. 
##### SUMMARY ec2 module tracebacks when using an invalidated credential. ##### STEPS TO REPRODUCE 1. Have a basic provisioning task: ``` - hosts: localhost connection: local gather_facts: False vars_files: - group_vars/all tasks: - name: Launch some instances ec2: > access_key=""{{ ec2_access_key }}"" secret_key=""{{ ec2_secret_key }}"" keypair=""{{ ec2_keypair }}"" group=""{{ ec2_security_group }}"" type=""{{ ec2_instance_type }}"" image=""{{ ec2_image }}"" region=""{{ ec2_region }}"" instance_tags=""{'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_''{{ demo_tag_name }}'}"" count=""{{ ec2_instance_count }}"" wait=true register: ec2 ``` 1. Have a AWS key that's been disabled. 2. Run the playbook ##### EXPECTED RESULTS Clean failure with a 'permission denied' error. ##### ACTUAL RESULTS ``` TASK [Launch some instances] *************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AuthFailureAWS was not able to validate the provided access credentials0d83c70d-523b-49ab-9a48-0a1d8eca26e6 fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last): File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 3628, in main() File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 1413, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 898, in create_instances grp_details = ec2.get_all_security_groups() File \""/var/lib/awx/venv/ansible/lib/python2.7/site-packages/boto/ec2/connection.py\"", line 2969, in get_all_security_groups [('item', SecurityGroup)], verb='POST') File \""/var/lib/awx/venv/ansible/lib/python2.7/site-packages/boto/connection.py\"", line 1182, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentials0d83c70d-523b-49ab-9a48-0a1d8eca26e6 "", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"ec2 module tracebacks on cred failure - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2 module ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = /usr/lib/python2.7/site-packages/awx/plugins/library ``` ##### CONFIGURATION Nothing of note. ##### OS / ENVIRONMENT CentOS 7. ##### SUMMARY ec2 module tracebacks when using an invalidated credential. ##### STEPS TO REPRODUCE 1. Have a basic provisioning task: ``` - hosts: localhost connection: local gather_facts: False vars_files: - group_vars/all tasks: - name: Launch some instances ec2: > access_key=""{{ ec2_access_key }}"" secret_key=""{{ ec2_secret_key }}"" keypair=""{{ ec2_keypair }}"" group=""{{ ec2_security_group }}"" type=""{{ ec2_instance_type }}"" image=""{{ ec2_image }}"" region=""{{ ec2_region }}"" instance_tags=""{'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_''{{ demo_tag_name }}'}"" count=""{{ ec2_instance_count }}"" wait=true register: ec2 ``` 1. Have a AWS key that's been disabled. 2. Run the playbook ##### EXPECTED RESULTS Clean failure with a 'permission denied' error. 
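To make the expected behaviour concrete, here is a minimal Python sketch of converting the boto exception into a clean module failure instead of a traceback; the wrapper function is an assumption and not the ec2 module's actual structure:
```
# Hypothetical error-handling wrapper around the call that fails in the traceback below.
from boto.exception import EC2ResponseError

def get_security_groups_or_fail(module, ec2_connection):
    """Fetch all security groups, turning auth errors into module.fail_json calls."""
    try:
        return ec2_connection.get_all_security_groups()
    except EC2ResponseError as e:
        # e.g. 401 AuthFailure: AWS was not able to validate the provided credentials
        module.fail_json(msg="EC2 request failed: %s %s" % (e.status, e.reason))
```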
##### ACTUAL RESULTS ``` TASK [Launch some instances] *************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AuthFailureAWS was not able to validate the provided access credentials0d83c70d-523b-49ab-9a48-0a1d8eca26e6 fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last): File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 3628, in main() File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 1413, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File \""/var/lib/awx/.ansible/tmp/ansible-tmp-1464811193.49-144253552378267/ec2\"", line 898, in create_instances grp_details = ec2.get_all_security_groups() File \""/var/lib/awx/venv/ansible/lib/python2.7/site-packages/boto/ec2/connection.py\"", line 2969, in get_all_security_groups [('item', SecurityGroup)], verb='POST') File \""/var/lib/awx/venv/ansible/lib/python2.7/site-packages/boto/connection.py\"", line 1182, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentials0d83c70d-523b-49ab-9a48-0a1d8eca26e6 "", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1, module tracebacks on cred failure issue type bug report component name module ansible version ansible config file etc ansible ansible cfg configured module search path usr lib site packages awx plugins library configuration nothing of note os environment centos summary module tracebacks when using an invalidated credential steps to reproduce have a basic provisioning task hosts localhost connection local gather facts false vars files group vars all tasks name launch some instances access key access key secret key secret key keypair keypair group security group type instance type image image region region instance tags type instance type group security group name demo demo tag name count instance count wait true register have a aws key that s been disabled run the playbook expected results clean failure with a permission denied error actual results task an exception occurred during task execution to see the full traceback use vvv the error was authfailure aws was not able to validate the provided access credentials fatal failed changed false failed true module stderr traceback most recent call last file var lib awx ansible tmp ansible tmp line in main file var lib awx ansible tmp ansible tmp line in main instance dict array new instance ids changed create instances module vpc file var lib awx ansible tmp ansible tmp line in create instances grp details get all security groups file var lib awx venv ansible lib site packages boto connection py line in get all security groups verb post file var lib awx venv ansible lib site packages boto connection py line in get list raise self responseerror response status response reason body boto exception unauthorized authfailure aws was not able to validate the provided access credentials module stdout msg module failure parsed false ,1 1912,6577573416.0,IssuesEvent,2017-09-12 01:51:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Create custom VM with vsphere_guest on specific folder and resource_pool,affects_2.3 cloud feature_idea vmware 
waiting_on_maintainer,"##### ISSUE TYPE Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY I need a way to create a custom virtual machine (VM) on specific folder and resource pool. I'm trying with the examples of official docs but the VM always was created outside of any resource pool and on cluster folder...then I was reviewed the sourcecode and when you use `state: powered_on` you can't specify the option resource_pool. The option resource_pool only was available when you use clone from template. ",True,"Create custom VM with vsphere_guest on specific folder and resource_pool - ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION N/A ##### SUMMARY I need a way to create a custom virtual machine (VM) on specific folder and resource pool. I'm trying with the examples of official docs but the VM always was created outside of any resource pool and on cluster folder...then I was reviewed the sourcecode and when you use `state: powered_on` you can't specify the option resource_pool. The option resource_pool only was available when you use clone from template. ",1,create custom vm with vsphere guest on specific folder and resource pool issue type feature idea component name vsphere guest ansible version n a summary i need a way to create a custom virtual machine vm on specific folder and resource pool i m trying with the examples of official docs but the vm always was created outside of any resource pool and on cluster folder then i was reviewed the sourcecode and when you use state powered on you can t specify the option resource pool the option resource pool only was available when you use clone from template ,1 1882,6577511061.0,IssuesEvent,2017-09-12 01:25:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Make add_host less verbose,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME add_host ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] remote_tmp = $HOME/.ansible/tmp roles_path = /etc/ansible/roles inventory = inventory host_key_checking = False ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S jinja2_extensions = jinja2.ext.do [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = False [paramiko_connection] [ssh_connection] pipelining = True scp_if_ssh = True ssh_args = -F ssh_config [accelerate] [selinux] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When running `add_host` I get a ton of output on my shell. I don't see any reasons for this verbose output. ##### STEPS TO REPRODUCE ``` add_host: name: foobar ``` ##### EXPECTED RESULTS Not a ton of output ##### ACTUAL RESULTS This is the output... 
Without -vvv for a single server on OpenStack > ok: [localhost] => (item={'_ansible_no_log': False, u'changed': False, u'server': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': 
u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, 'item': u'singlebox', 'invocation': {'module_name': u'os_server', u'module_args': {u'auth_type': None, u'availability_zone': None, u'image': u'Ubuntu 14.04 foo-cloudimg amd64', u'image_exclude': u'(deprecated)', u'flavor_include': None, u'meta': None, u'flavor': u'm1.micro', u'security_groups': [u'default', u'default'], u'boot_from_volume': False, u'userdata': u'#cloud-config\nsystem_info:\n default_user:\n name: foostaff\n home: /home/foostaff\n shell: /bin/bash\n lock_passwd: True\n gecos: foostaff\n sudo: [""ALL=(ALL) NOPASSWD:ALL""]\nruncmd:\n - [ mkdir, -p, ""/home/foostaff/.ssh"" ]\n - ""wget \'https://gitlab.foo.de/security/foostaff-keys/raw/master/authorized_keys\' -O - > /home/foostaff/.ssh/authorized_keys -q -t 5 -T 300""\n - [ chmod, 700, ""/home/foostaff/.ssh"" ]\n - [ chmod, 600, ""/home/foostaff/.ssh/authorized_keys"" ]\n - [ chown, -R, foostaff, ""/home/foostaff/.ssh/"" ]\n', u'network': None, u'nics': [{u'net-name': u'internal'}], u'floating_ips': None, u'flavor_ram': None, u'volume_size': False, u'state': u'present', u'auto_ip': True, u'cloud': None, u'floating_ip_pools': [u'float1'], u'region_name': None, u'key_name': u'username', u'api_timeout': None, u'auth': None, u'endpoint_type': u'public', u'boot_volume': None, u'key': None, u'cacert': None, u'terminate_volume': False, u'wait': True, u'name': u'singlebox', u'timeout': 180, u'cert': None, u'volumes': [], u'verify': True, u'config_drive': False}}, u'openstack': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': 
u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35'}) ",True,"Make add_host less verbose - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME add_host ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] remote_tmp = $HOME/.ansible/tmp roles_path = /etc/ansible/roles inventory = inventory host_key_checking = False ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S jinja2_extensions = jinja2.ext.do [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = False [paramiko_connection] [ssh_connection] pipelining = True scp_if_ssh = True ssh_args = -F ssh_config [accelerate] [selinux] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When running `add_host` I get a ton of output on my shell. I don't see any reasons for this verbose output. ##### STEPS TO REPRODUCE ``` add_host: name: foobar ``` ##### EXPECTED RESULTS Not a ton of output ##### ACTUAL RESULTS This is the output... 
Without -vvv for a single server on OpenStack > ok: [localhost] => (item={'_ansible_no_log': False, u'changed': False, u'server': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': 
u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, 'item': u'singlebox', 'invocation': {'module_name': u'os_server', u'module_args': {u'auth_type': None, u'availability_zone': None, u'image': u'Ubuntu 14.04 foo-cloudimg amd64', u'image_exclude': u'(deprecated)', u'flavor_include': None, u'meta': None, u'flavor': u'm1.micro', u'security_groups': [u'default', u'default'], u'boot_from_volume': False, u'userdata': u'#cloud-config\nsystem_info:\n default_user:\n name: foostaff\n home: /home/foostaff\n shell: /bin/bash\n lock_passwd: True\n gecos: foostaff\n sudo: [""ALL=(ALL) NOPASSWD:ALL""]\nruncmd:\n - [ mkdir, -p, ""/home/foostaff/.ssh"" ]\n - ""wget \'https://gitlab.foo.de/security/foostaff-keys/raw/master/authorized_keys\' -O - > /home/foostaff/.ssh/authorized_keys -q -t 5 -T 300""\n - [ chmod, 700, ""/home/foostaff/.ssh"" ]\n - [ chmod, 600, ""/home/foostaff/.ssh/authorized_keys"" ]\n - [ chown, -R, foostaff, ""/home/foostaff/.ssh/"" ]\n', u'network': None, u'nics': [{u'net-name': u'internal'}], u'floating_ips': None, u'flavor_ram': None, u'volume_size': False, u'state': u'present', u'auto_ip': True, u'cloud': None, u'floating_ip_pools': [u'float1'], u'region_name': None, u'key_name': u'username', u'api_timeout': None, u'auth': None, u'endpoint_type': u'public', u'boot_volume': None, u'key': None, u'cacert': None, u'terminate_volume': False, u'wait': True, u'name': u'singlebox', u'timeout': 180, u'cert': None, u'volumes': [], u'verify': True, u'config_drive': False}}, u'openstack': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': 
u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35'}) ",1,make add host less verbose issue type feature idea component name add host ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration remote tmp home ansible tmp roles path etc ansible roles inventory inventory host key checking false ansible managed ansible managed file modified on y m d h m s extensions ext do become true become method sudo become user root become ask pass false pipelining true scp if ssh true ssh args f ssh config os environment n a summary when running add host i get a ton of output on my shell i don t see any reasons for this verbose output steps to reproduce add host name foobar expected results not a ton of output actual results this is the output without vvv for a single server on openstack ok item ansible no log false u changed false u server u os ext sts task state none u addresses u internal u image u id u u os ext sts vm state u active u os srv usg launched at u u name attr u name u flavor u id u u name u micro u az u nova u id u u security groups u name u default u user id u u os dcf diskconfig u manual u networks u internal u u u u u cloud u envvars u key name u username u progress u os ext sts power state u interface ip u u config drive u u status u active u updated u u hostid u u human id true u os srv usg terminated at none u public u u public u u private u u os ext az availability zone u nova u name u singlebox u created u u tenant id u u region u regionone u os extended volumes volumes attached u volumes u metadata u human id u singlebox item u singlebox invocation module name u os server u module args u auth type none u availability zone none u image u ubuntu foo cloudimg u image exclude u deprecated u flavor 
include none u meta none u flavor u micro u security groups u boot from volume false u userdata u cloud config nsystem info n default user n name foostaff n home home foostaff n shell bin bash n lock passwd true n gecos foostaff n sudo nruncmd n n wget o home foostaff ssh authorized keys q t t n n n n u network none u nics u floating ips none u flavor ram none u volume size false u state u present u auto ip true u cloud none u floating ip pools u region name none u key name u username u api timeout none u auth none u endpoint type u public u boot volume none u key none u cacert none u terminate volume false u wait true u name u singlebox u timeout u cert none u volumes u verify true u config drive false u openstack u os ext sts task state none u addresses u internal u image u id u u os ext sts vm state u active u os srv usg launched at u u name attr u name u flavor u id u u name u micro u az u nova u id u u security groups u name u default u user id u u os dcf diskconfig u manual u networks u internal u u u u u cloud u envvars u key name u username u progress u os ext sts power state u interface ip u u config drive u u status u active u updated u u hostid u u human id true u os srv usg terminated at none u public u u public u u private u u os ext az availability zone u nova u name u singlebox u created u u tenant id u u region u regionone u os extended volumes volumes attached u volumes u metadata u human id u singlebox u id u ,1 1299,5541703969.0,IssuesEvent,2017-03-22 13:32:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Ad hoc usage of ec2_tag module results in AttributeError: 'str' object has no attribute 'items',affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type: - Bug Report ##### Component Name: ec2_tag module ##### Ansible Version: ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: When trying to set a tag using module 'ec2_tag', set fails with an error. I have tried several different variants to escape ' or "" and to use ' or "" to surround the tags field and for the entire -a attributes value field. I stepped through the code and the input tags= value is always a string and not parsed as a dict properly. ##### Steps To Reproduce: These, and variants of, all yield the same parse error: ``` ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 tags={\""Name\"":\""foo\""}' -vvv ansible 'localhost' -m ec2_tag -a ""resource=i-074lkeke region=us-west-2 tags='{\""Name\"":\""server1\""}'"" -vvv ``` ##### Expected Results: Success > New Tag created for instance. ##### Actual Results: ``` No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: kfletcher 127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `"" )' 127.0.0.1 PUT /var/folders/zt/7_vhqsms595dwgk_my5_y26m0000gn/T/tmpRTl37e TO /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag 127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag; rm -rf ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/"" > /dev/null 2>&1' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag"", line 2368, in main() File ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag"", line 152, in main if set(tags.items()).issubset(set(tagdict.items())): AttributeError: 'str' object has no attribute 'items' localhost | FAILED! => { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ec2_tag"" }, ""parsed"": false } ``` FYI, reading the tags works fine: `$ ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 state=list'` ``` localhost | SUCCESS => { ""changed"": false, ""tags"": { ""Name"": ""test"" } } ``` I also got filters (also requires a dict) working using similar syntax: ``` ansible 'localhost' -m ec2_remote_facts -a 'region=us-west-2 filters={\""private-dns-name\"":\""ip-10-13-49-34.us-west-2.compute.internal\""}' ``` ",True,"Ad hoc usage of ec2_tag module results in AttributeError: 'str' object has no attribute 'items' - ##### Issue Type: - Bug Report ##### Component Name: ec2_tag module ##### Ansible Version: ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: When trying to set a tag using module 'ec2_tag', set fails with an error. I have tried several different variants to escape ' or "" and to use ' or "" to surround the tags field and for the entire -a attributes value field. I stepped through the code and the input tags= value is always a string and not parsed as a dict properly. ##### Steps To Reproduce: These, and variants of, all yield the same parse error: ``` ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 tags={\""Name\"":\""foo\""}' -vvv ansible 'localhost' -m ec2_tag -a ""resource=i-074lkeke region=us-west-2 tags='{\""Name\"":\""server1\""}'"" -vvv ``` ##### Expected Results: Success > New Tag created for instance. ##### Actual Results: ``` No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: kfletcher 127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `"" )' 127.0.0.1 PUT /var/folders/zt/7_vhqsms595dwgk_my5_y26m0000gn/T/tmpRTl37e TO /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag 127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag; rm -rf ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/"" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag"", line 2368, in main() File ""/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag"", line 152, in main if set(tags.items()).issubset(set(tagdict.items())): AttributeError: 'str' object has no attribute 'items' localhost | FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""ec2_tag"" }, ""parsed"": false } ``` FYI, reading the tags works fine: `$ ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 state=list'` ``` localhost | SUCCESS => { ""changed"": false, ""tags"": { ""Name"": ""test"" } } ``` I also got filters (also requires a dict) working using similar syntax: ``` ansible 'localhost' -m ec2_remote_facts -a 'region=us-west-2 filters={\""private-dns-name\"":\""ip-10-13-49-34.us-west-2.compute.internal\""}' ``` ",1,ad hoc usage of tag module results in attributeerror str object has no attribute items issue type bug report component name tag module ansible version ansible config file configured module search path default w o overrides ansible configuration n a environment n a summary when trying to set a tag using module tag set fails with an error i have tried several different variants to escape or and to use or to surround the tags field and for the entire a attributes value field i stepped through the code and the input tags value is always a string and not parsed as a dict properly steps to reproduce these and variants of all yield the same parse error ansible localhost m tag a resource i region us west tags name foo vvv ansible localhost m tag a resource i region us west tags name vvv expected results success new tag created for instance actual results no config file found using defaults establish local connection for user kfletcher exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders zt t to users kfletcher ansible tmp ansible tmp tag exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users kfletcher ansible tmp ansible tmp tag rm rf users kfletcher ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file users kfletcher ansible tmp ansible tmp tag line in main file users kfletcher ansible tmp ansible tmp tag line in main if set tags items issubset set tagdict items attributeerror str object has no attribute items localhost failed changed false failed true invocation module name tag parsed false fyi reading the tags works fine ansible localhost m tag a resource i region us west state list localhost success changed false tags name test i also got filters also requires a dict working using similar syntax ansible localhost m remote facts a region us west filters private dns name ip us west compute internal ,1