Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body,index,text_combine,label,text,binary_label 1130,4998415563.0,IssuesEvent,2016-12-09 19:47:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,replace: TypeError when run under Python 3,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - files/replace ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD cdec853e37) last updated 2016/12/01 10:23:30 (GMT +200) lib/ansible/modules/core: (detached HEAD fe9c56a003) last updated 2016/12/01 10:24:42 (GMT +200) lib/ansible/modules/extras: (detached HEAD f564e77a08) last updated 2016/12/01 10:24:42 (GMT +200) config file = configured module search path = ['/Users/per/Projects/servers/submodules/ansible/library'] ``` ##### CONFIGURATION - ansible_python_interpreter=/usr/bin/python3 ##### OS / ENVIRONMENT - Local: MacOS - Remote: Ubuntu 16.04, with Python 3.5.2 ##### SUMMARY Replace fails with a TypeError when run under Python 3 ##### STEPS TO REPRODUCE ``` ansible *** -m replace \ -a ""dest=/etc/lsb-release regexp=nomatchfound replace=nomatchfound"" ``` ##### EXPECTED RESULTS No replacement (since regex doesn't match). ##### ACTUAL RESULTS Command fails with error: `TypeError: cannot use a string pattern on a bytes-like object` ``` *** | FAILED! => { ""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to *** closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in \r\n main()\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main\r\n result = re.subn(mre, params['replace'], contents, 0)\r\n File \""/usr/lib/python3.5/re.py\"", line 193, in subn\r\n return _compile(pattern, flags).subn(repl, string, count)\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n"", ""msg"": ""MODULE FAILURE"" } ``` Readable traceback (from above): ``` Traceback (most recent call last): File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in main() File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main result = re.subn(mre, params['replace'], contents, 0) File \""/usr/lib/python3.5/re.py\"", line 193, in subn return _compile(pattern, flags).subn(repl, string, count) TypeError: cannot use a string pattern on a bytes-like object ``` ",True,"replace: TypeError when run under Python 3 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - files/replace ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (detached HEAD cdec853e37) last updated 2016/12/01 10:23:30 (GMT +200) lib/ansible/modules/core: (detached HEAD fe9c56a003) last updated 2016/12/01 10:24:42 (GMT +200) lib/ansible/modules/extras: (detached HEAD f564e77a08) last updated 2016/12/01 10:24:42 (GMT +200) config file = configured module search path = ['/Users/per/Projects/servers/submodules/ansible/library'] ``` ##### CONFIGURATION - ansible_python_interpreter=/usr/bin/python3 ##### OS / ENVIRONMENT - Local: MacOS - Remote: Ubuntu 16.04, with Python 3.5.2 ##### SUMMARY Replace fails with a TypeError when run under Python 3 ##### STEPS TO REPRODUCE ``` ansible *** -m replace \ -a ""dest=/etc/lsb-release regexp=nomatchfound replace=nomatchfound"" ``` ##### EXPECTED RESULTS No replacement (since regex doesn't match). ##### ACTUAL RESULTS Command fails with error: `TypeError: cannot use a string pattern on a bytes-like object` ``` *** | FAILED! 
=> { ""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to *** closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in \r\n main()\r\n File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main\r\n result = re.subn(mre, params['replace'], contents, 0)\r\n File \""/usr/lib/python3.5/re.py\"", line 193, in subn\r\n return _compile(pattern, flags).subn(repl, string, count)\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n"", ""msg"": ""MODULE FAILURE"" } ``` Readable traceback (from above): ``` Traceback (most recent call last): File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 180, in main() File \""/tmp/ansible_1wjonocj/ansible_module_replace.py\"", line 153, in main result = re.subn(mre, params['replace'], contents, 0) File \""/usr/lib/python3.5/re.py\"", line 193, in subn return _compile(pattern, flags).subn(repl, string, count) TypeError: cannot use a string pattern on a bytes-like object ``` ",1,replace typeerror when run under python issue type bug report component name files replace ansible version ansible detached head last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path configuration ansible python interpreter usr bin os environment local macos remote ubuntu with python summary replace fails with a typeerror when run under python steps to reproduce ansible m replace a dest etc lsb release regexp nomatchfound replace nomatchfound expected results no replacement since regex doesn t match actual results command fails with error typeerror cannot use a string pattern on a bytes like object failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module replace py line in r n main r n file tmp ansible ansible module replace py line in main r n result re subn mre params contents r n file usr lib re py line in subn r n return compile pattern flags subn repl string count r ntypeerror cannot use a string pattern on a bytes like object r n msg module failure readable traceback from above traceback most recent call last file tmp ansible ansible module replace py line in main file tmp ansible ansible module replace py line in main result re subn mre params contents file usr lib re py line in subn return compile pattern flags subn repl string count typeerror cannot use a string pattern on a bytes like object ,1 486742,14013318071.0,IssuesEvent,2020-10-29 10:15:47,netdata/netdata,https://api.github.com/repos/netdata/netdata,closed,Netdata Cloud ECN href tag appears to be formatted incorrectly,area/web bug cloud priority/medium," ##### Bug report summary When you navigate to a specific node in Netdata Cloud and scroll to ECN, the raw HTML is visible instead of a link. ![Screenshot 2020-10-21 121228](https://user-images.githubusercontent.com/63246200/96818398-0e1bad00-13e7-11eb-840d-65d6aca9fc69.png) I inspected the webpage HTML and found that the tag was using < and > instead of < and >. ![Screenshot 2020-10-21 121311](https://user-images.githubusercontent.com/63246200/96818571-533fdf00-13e7-11eb-8206-4c9120ab103a.png) Upon changing the HTML with < and > in the tag, I was able to view the link properly. 
![Screenshot 2020-10-21 121829](https://user-images.githubusercontent.com/63246200/96818654-7bc7d900-13e7-11eb-9c78-2a593152567a.png) ![Screenshot 2020-10-21 121901](https://user-images.githubusercontent.com/63246200/96818678-88e4c800-13e7-11eb-81e0-2707b8140cee.png) ##### OS / Environment ``` Linux redacted 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux /etc/centos-release:CentOS Linux release 8.2.2004 (Core) /etc/os-release:NAME=""CentOS Linux"" /etc/os-release:VERSION=""8 (Core)"" /etc/os-release:ID=""centos"" /etc/os-release:ID_LIKE=""rhel fedora"" /etc/os-release:VERSION_ID=""8"" /etc/os-release:PLATFORM_ID=""platform:el8"" /etc/os-release:PRETTY_NAME=""CentOS Linux 8 (Core)"" /etc/os-release:ANSI_COLOR=""0;31"" /etc/os-release:CPE_NAME=""cpe:/o:centos:centos:8"" /etc/os-release:HOME_URL=""https://www.centos.org/"" /etc/os-release:BUG_REPORT_URL=""https://bugs.centos.org/"" /etc/os-release: /etc/os-release:CENTOS_MANTISBT_PROJECT=""CentOS-8"" /etc/os-release:CENTOS_MANTISBT_PROJECT_VERSION=""8"" /etc/os-release:REDHAT_SUPPORT_PRODUCT=""centos"" /etc/os-release:REDHAT_SUPPORT_PRODUCT_VERSION=""8"" /etc/os-release: /etc/redhat-release:CentOS Linux release 8.2.2004 (Core) /etc/system-release:CentOS Linux release 8.2.2004 (Core) ``` ##### Netdata version netdata v1.26.0-43-nightly ##### Component Name Netdata Cloud ##### Steps To Reproduce 1. Open Netdata Cloud in your browser 2. Navigate to one of your nodes 3. Scroll down to ECN 4. You will see raw HTML instead of a link to the Wikipedia article ##### Expected behavior I expected a link to the article to show up instead of raw HTML. I was able to reproduce this on a few other nodes running different netdata versions in Netdata Cloud. This does not affect the netdata web interface at *:19999, only Netdata Cloud.",1.0,"Netdata Cloud ECN href tag appears to be formatted incorrectly - ##### Bug report summary When you navigate to a specific node in Netdata Cloud and scroll to ECN, the raw HTML is visible instead of a link. ![Screenshot 2020-10-21 121228](https://user-images.githubusercontent.com/63246200/96818398-0e1bad00-13e7-11eb-840d-65d6aca9fc69.png) I inspected the webpage HTML and found that the tag was using &lt; and &gt; instead of < and >. ![Screenshot 2020-10-21 121311](https://user-images.githubusercontent.com/63246200/96818571-533fdf00-13e7-11eb-8206-4c9120ab103a.png) Upon changing the HTML with < and > in the tag, I was able to view the link properly. 
![Screenshot 2020-10-21 121829](https://user-images.githubusercontent.com/63246200/96818654-7bc7d900-13e7-11eb-9c78-2a593152567a.png) ![Screenshot 2020-10-21 121901](https://user-images.githubusercontent.com/63246200/96818678-88e4c800-13e7-11eb-81e0-2707b8140cee.png) ##### OS / Environment ``` Linux redacted 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux /etc/centos-release:CentOS Linux release 8.2.2004 (Core) /etc/os-release:NAME=""CentOS Linux"" /etc/os-release:VERSION=""8 (Core)"" /etc/os-release:ID=""centos"" /etc/os-release:ID_LIKE=""rhel fedora"" /etc/os-release:VERSION_ID=""8"" /etc/os-release:PLATFORM_ID=""platform:el8"" /etc/os-release:PRETTY_NAME=""CentOS Linux 8 (Core)"" /etc/os-release:ANSI_COLOR=""0;31"" /etc/os-release:CPE_NAME=""cpe:/o:centos:centos:8"" /etc/os-release:HOME_URL=""https://www.centos.org/"" /etc/os-release:BUG_REPORT_URL=""https://bugs.centos.org/"" /etc/os-release: /etc/os-release:CENTOS_MANTISBT_PROJECT=""CentOS-8"" /etc/os-release:CENTOS_MANTISBT_PROJECT_VERSION=""8"" /etc/os-release:REDHAT_SUPPORT_PRODUCT=""centos"" /etc/os-release:REDHAT_SUPPORT_PRODUCT_VERSION=""8"" /etc/os-release: /etc/redhat-release:CentOS Linux release 8.2.2004 (Core) /etc/system-release:CentOS Linux release 8.2.2004 (Core) ``` ##### Netdata version netdata v1.26.0-43-nightly ##### Component Name Netdata Cloud ##### Steps To Reproduce 1. Open Netdata Cloud in your browser 2. Navigate to one of your nodes 3. Scroll down to ECN 4. You will see raw HTML instead of a link to the Wikipedia article ##### Expected behavior I expected a link to the article to show up instead of raw HTML. I was able to reproduce this on a few other nodes running different netdata versions in Netdata Cloud. This does not affect the netdata web interface at *:19999, only Netdata Cloud.",0,netdata cloud ecn href tag appears to be formatted incorrectly when creating a bug report please verify first that your issue is not already reported on github test if the latest release and master branch are affected too bug report summary when you navigate to a specific node in netdata cloud and scroll to ecn the raw html is visible instead of a link i inspected the webpage html and found that the tag was using lt and gt instead of upon changing the html with in the tag i was able to view the link properly os environment provide as much information about your environment which operating system and distribution you re using if netdata is running in a container etc as possible to allow us reproduce this bug faster to get this information execute the following commands based on your operating system uname a grep hv etc release linux uname a uname k bsd uname a sw vers macos place the output from the command in the code section below linux redacted smp mon sep utc gnu linux etc centos release centos linux release core etc os release name centos linux etc os release version core etc os release id centos etc os release id like rhel fedora etc os release version id etc os release platform id platform etc os release pretty name centos linux core etc os release ansi color etc os release cpe name cpe o centos centos etc os release home url etc os release bug report url etc os release etc os release centos mantisbt project centos etc os release centos mantisbt project version etc os release redhat support product centos etc os release redhat support product version etc os release etc redhat release centos linux release core etc system release centos linux release core netdata version 
provide output of netdata v if netdata is running execute ps aux grep e o netdata v netdata nightly component name let us know which component is affected by the bug our code is structured according to its component so the component name is the same as the top level directory of the repository for example a bug in the dashboard would be under the web component netdata cloud steps to reproduce describe how you found this bug and how we can reproduce it preferably with a minimal test case scenario if you d like to attach larger files use gist github com and paste in links open netdata cloud in your browser navigate to one of your nodes scroll down to ecn you will see raw html instead of a link to the wikipedia article expected behavior i expected a link to the article to show up instead of raw html i was able to reproduce this on a few other nodes running different netdata versions in netdata cloud this does not affect the netdata web interface at only netdata cloud ,0 892,4553457459.0,IssuesEvent,2016-09-13 04:57:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Bad value substitution in ini_file Module with percent placeholder values,affects_1.7 bug_report P3 waiting_on_maintainer,"##### Issue Type: Bug ##### Component Name: ini_file module ##### Ansible Version: 1.7.2 1.8.2 ##### Environment: Mac OSX 10.10.1 Yosemite ##### Summary: I need special values in an ini-file to configure my supervisor-daemon. For example a value looks like this: process_name=%(program_name)s ##### Steps To Reproduce: To reporduce this issue run following playbook twice. ``` --- - hosts: all tasks: - ini_file: dest=""/tmp/tmp.ini"" section=""program:update"" option=""process_name"" value=""%(program_name)s"" ``` ##### Expected Results: After first run everything OK, after second run I get an error and nothing happens. ##### Actual Results: ``` ConfigParser.InterpolationMissingOptionError: Bad value substitution: section: [program:update] option : process_name key : program_name rawval : %(program_name)s ```",True,"Bad value substitution in ini_file Module with percent placeholder values - ##### Issue Type: Bug ##### Component Name: ini_file module ##### Ansible Version: 1.7.2 1.8.2 ##### Environment: Mac OSX 10.10.1 Yosemite ##### Summary: I need special values in an ini-file to configure my supervisor-daemon. For example a value looks like this: process_name=%(program_name)s ##### Steps To Reproduce: To reporduce this issue run following playbook twice. ``` --- - hosts: all tasks: - ini_file: dest=""/tmp/tmp.ini"" section=""program:update"" option=""process_name"" value=""%(program_name)s"" ``` ##### Expected Results: After first run everything OK, after second run I get an error and nothing happens. 
##### Actual Results: ``` ConfigParser.InterpolationMissingOptionError: Bad value substitution: section: [program:update] option : process_name key : program_name rawval : %(program_name)s ```",1,bad value substitution in ini file module with percent placeholder values issue type bug component name ini file module ansible version environment mac osx yosemite summary i need special values in an ini file to configure my supervisor daemon for example a value looks like this process name program name s steps to reproduce to reporduce this issue run following playbook twice hosts all tasks ini file dest tmp tmp ini section program update option process name value program name s expected results after first run everything ok after second run i get an error and nothing happens actual results configparser interpolationmissingoptionerror bad value substitution section option process name key program name rawval program name s ,1 259988,22583305056.0,IssuesEvent,2022-06-28 13:26:41,mozilla/addons-frontend,https://api.github.com/repos/mozilla/addons-frontend,closed,Follow-ups for test_installAddon migration to react-testing-library,component: testing qa: not needed priority: p3,"Follow-ups identified during https://github.com/mozilla/addons-frontend/pull/11612: - Once all tests have been migrated, replace old `getFakeAddonManagerWrapper` implementation with `getFakeAddonManagerWrapperWithJest`, which we will then be able to remove. - Refactor `setCurrentStatus` status to use a parameterize style of testing using `it.each()`: > I'm wondering if we could combine all these tests that set up a state and then check that `setInstallState` has been called with a particular `status`, using `it.each()`? It seems like the setup is close to identical, and then the assertion is also nearly the same, just that we are looking for a different `status`, so they seem like a good candidate for a set of parameterized tests. > > (...) the override for `getAddon` would be part of the parameterized data. You'd only have to include what gets returned by the Promise, I think, so the data would look like `{ isActive: true, isEnabled: false }`, for example. I can see how that might be a bit ugly, but I still think getting rid of all of this duplicate code would make sense. - Migrate tests still marked as `FIXME` that modify the add-on after it has been loaded. > It's definitely not going to work by dispatching actions. You'll need to render a page and then either use `onLocationChanged` or dispatch a `loadAddon` (I generally use the former as it's more like what a user would do). The problem is that `addon` is a direct prop of `InstallButtonWrapper`, and we cannot set a direct prop of an already loaded component. We need to have the parent re-render the component with the new prop value, which is what happens when the test runs from a page. This can be tested from either the `Addon` page or the `AddonVersions` page, so these tests should live in the test suite for one of those - Remove `test_installAddon.js` once all of the above is done.",1.0,"Follow-ups for test_installAddon migration to react-testing-library - Follow-ups identified during https://github.com/mozilla/addons-frontend/pull/11612: - Once all tests have been migrated, replace old `getFakeAddonManagerWrapper` implementation with `getFakeAddonManagerWrapperWithJest`, which we will then be able to remove. 
- Refactor `setCurrentStatus` status to use a parameterize style of testing using `it.each()`: > I'm wondering if we could combine all these tests that set up a state and then check that `setInstallState` has been called with a particular `status`, using `it.each()`? It seems like the setup is close to identical, and then the assertion is also nearly the same, just that we are looking for a different `status`, so they seem like a good candidate for a set of parameterized tests. > > (...) the override for `getAddon` would be part of the parameterized data. You'd only have to include what gets returned by the Promise, I think, so the data would look like `{ isActive: true, isEnabled: false }`, for example. I can see how that might be a bit ugly, but I still think getting rid of all of this duplicate code would make sense. - Migrate tests still marked as `FIXME` that modify the add-on after it has been loaded. > It's definitely not going to work by dispatching actions. You'll need to render a page and then either use `onLocationChanged` or dispatch a `loadAddon` (I generally use the former as it's more like what a user would do). The problem is that `addon` is a direct prop of `InstallButtonWrapper`, and we cannot set a direct prop of an already loaded component. We need to have the parent re-render the component with the new prop value, which is what happens when the test runs from a page. This can be tested from either the `Addon` page or the `AddonVersions` page, so these tests should live in the test suite for one of those - Remove `test_installAddon.js` once all of the above is done.",0,follow ups for test installaddon migration to react testing library follow ups identified during once all tests have been migrated replace old getfakeaddonmanagerwrapper implementation with getfakeaddonmanagerwrapperwithjest which we will then be able to remove refactor setcurrentstatus status to use a parameterize style of testing using it each i m wondering if we could combine all these tests that set up a state and then check that setinstallstate has been called with a particular status using it each it seems like the setup is close to identical and then the assertion is also nearly the same just that we are looking for a different status so they seem like a good candidate for a set of parameterized tests the override for getaddon would be part of the parameterized data you d only have to include what gets returned by the promise i think so the data would look like isactive true isenabled false for example i can see how that might be a bit ugly but i still think getting rid of all of this duplicate code would make sense migrate tests still marked as fixme that modify the add on after it has been loaded it s definitely not going to work by dispatching actions you ll need to render a page and then either use onlocationchanged or dispatch a loadaddon i generally use the former as it s more like what a user would do the problem is that addon is a direct prop of installbuttonwrapper and we cannot set a direct prop of an already loaded component we need to have the parent re render the component with the new prop value which is what happens when the test runs from a page this can be tested from either the addon page or the addonversions page so these tests should live in the test suite for one of those remove test installaddon js once all of the above is done ,0 21928,11660539337.0,IssuesEvent,2020-03-03 
03:41:24,cityofaustin/atd-geospatial,https://api.github.com/repos/cityofaustin/atd-geospatial,closed,Data-Informed PHB Ranking,Epic Service: Geo Type: Enhancement Workgroup: AMD,"Email > renee.orr@austintexas.gov Describe the problem. > AMD is working toward a data-based process to identify locations for new PHBs. We would like to investigate the possibility of using existing data developed for Active Trans' Pedestrian Safety Action Plan (PSAP), and revise it to fit our program. Active Trans is also interested in updating this data for the the PSAP. We anticipate needing to coordinate this GIS data with PHB requests in Data Tracker. Also need to discuss the frequency this data can be updated. How soon do you need this? > Flexible — An extended timeline is OK Is there anything else we should know? > Would like to have this process defined and in use by November, so we can meet the stated annual December ranking. I request a scoping meeting be scheduled to discuss this request further. Please include Joel Meyer in the meeting, This process is supported by Jen in response to a request from Jim Dale. Request ID: DTS19-100165 ",1.0,"Data-Informed PHB Ranking - Email > renee.orr@austintexas.gov Describe the problem. > AMD is working toward a data-based process to identify locations for new PHBs. We would like to investigate the possibility of using existing data developed for Active Trans' Pedestrian Safety Action Plan (PSAP), and revise it to fit our program. Active Trans is also interested in updating this data for the the PSAP. We anticipate needing to coordinate this GIS data with PHB requests in Data Tracker. Also need to discuss the frequency this data can be updated. How soon do you need this? > Flexible — An extended timeline is OK Is there anything else we should know? > Would like to have this process defined and in use by November, so we can meet the stated annual December ranking. I request a scoping meeting be scheduled to discuss this request further. Please include Joel Meyer in the meeting, This process is supported by Jen in response to a request from Jim Dale. 
Request ID: DTS19-100165 ",0,data informed phb ranking email renee orr austintexas gov describe the problem amd is working toward a data based process to identify locations for new phbs we would like to investigate the possibility of using existing data developed for active trans pedestrian safety action plan psap and revise it to fit our program active trans is also interested in updating this data for the the psap we anticipate needing to coordinate this gis data with phb requests in data tracker also need to discuss the frequency this data can be updated how soon do you need this flexible — an extended timeline is ok is there anything else we should know would like to have this process defined and in use by november so we can meet the stated annual december ranking i request a scoping meeting be scheduled to discuss this request further please include joel meyer in the meeting this process is supported by jen in response to a request from jim dale request id ,0 745,4350929598.0,IssuesEvent,2016-07-31 15:23:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Apt module - the possibilty to know if a debian package is present or not ,feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.2.0 config file = configured module search path = Default w/o override ``` ##### CONFIGURATION No files and no env ##### OS / ENVIRONMENT ``` lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.4 (jessie) Release: 8.4 Codename: jessie ``` ##### SUMMARY Just for obtain an little feature : the possibilty to know if a debian package is present or not . After, if the condition is true, we can register the package version. Because, using the shell module is dirty :+1: ``` - name: test version shell: haproxy -v | awk '$0 ~ /HA-Proxy/ {print$3}' register: haproxyversion tags: - status - name: status of backends shell: echo ""show servers state"" | nc localhost 666 | grep -Ev ""^1|^#|^$"" | awk '{print""frontend:"""" ""$2"" """"backend:"""" ""$4"" """"ip:"""" ""$5"" """"status:"""" ""$6}' register: haproxyout when: haproxyversion.stdout.find('1.6') != -1 tags: - status ``` ##### STEPS TO REPRODUCE It's not a bug ##### EXPECTED RESULTS It's not a bug ##### ACTUAL RESULTS It's not a bug ",True,"Apt module - the possibilty to know if a debian package is present or not - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME apt ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.2.0 config file = configured module search path = Default w/o override ``` ##### CONFIGURATION No files and no env ##### OS / ENVIRONMENT ``` lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.4 (jessie) Release: 8.4 Codename: jessie ``` ##### SUMMARY Just for obtain an little feature : the possibilty to know if a debian package is present or not . After, if the condition is true, we can register the package version. 
Because, using the shell module is dirty :+1: ``` - name: test version shell: haproxy -v | awk '$0 ~ /HA-Proxy/ {print$3}' register: haproxyversion tags: - status - name: status of backends shell: echo ""show servers state"" | nc localhost 666 | grep -Ev ""^1|^#|^$"" | awk '{print""frontend:"""" ""$2"" """"backend:"""" ""$4"" """"ip:"""" ""$5"" """"status:"""" ""$6}' register: haproxyout when: haproxyversion.stdout.find('1.6') != -1 tags: - status ``` ##### STEPS TO REPRODUCE It's not a bug ##### EXPECTED RESULTS It's not a bug ##### ACTUAL RESULTS It's not a bug ",1,apt module the possibilty to know if a debian package is present or not issue type feature idea component name apt ansible version ansible version ansible config file configured module search path default w o override configuration no files and no env os environment lsb release a no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie summary just for obtain an little feature the possibilty to know if a debian package is present or not after if the condition is true we can register the package version because using the shell module is dirty name test version shell haproxy v awk ha proxy print register haproxyversion tags status name status of backends shell echo show servers state nc localhost grep ev awk print frontend backend ip status register haproxyout when haproxyversion stdout find tags status steps to reproduce it s not a bug expected results it s not a bug actual results it s not a bug ,1 421587,28326197803.0,IssuesEvent,2023-04-11 07:23:43,opensquare-network/statescan-v2,https://api.github.com/repos/opensquare-network/statescan-v2,opened,"runtime, comparison",documentation," --- ",1.0,"runtime, comparison - --- ",0,runtime comparison img width alt image src img width alt image src ,0 1478,6412426005.0,IssuesEvent,2017-08-08 03:12:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Permissions issue when copying directory,affects_2.1 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `copy` ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began. ##### OS / ENVIRONMENT I'm unning Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment. ##### SUMMARY I'm trying to copy a directory (and its files) from `/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well. ##### STEPS TO REPRODUCE As far as I can tell, just run the task below under Ansible 2.1.2. ``` - name: Dotfiles | Install ViM customizations become: yes become_user: ""{{ username }}"" copy: src: .vim dest: ~/ mode: 0664 directory_mode: 0775 force: yes ``` ##### EXPECTED RESULTS The directory should be copied and permissions set as specified. ##### ACTUAL RESULTS I get an error related to symbolic permissions. ``` TASK [user : Dotfiles | Install ViM customizations] **************************** fatal: [default]: FAILED! 
=> {""changed"": false, ""checksum"": ""109d2e70b4a83619eec12768f976177e55168de1"", ""details"": ""bad symbolic permission for mode: 509"", ""failed"": true, ""gid"": 1000, ""group"": ""vagrant"", ""mode"": ""0775"", ""msg"": ""mode must be in octal or symbolic form"", ""owner"": ""vagrant"", ""path"": ""/home/vagrant/.vim"", ""size"": 4096, ""state"": ""directory"", ""uid"": 1000} ``` ",True,"Permissions issue when copying directory - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `copy` ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began. ##### OS / ENVIRONMENT I'm unning Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment. ##### SUMMARY I'm trying to copy a directory (and its files) from `/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well. ##### STEPS TO REPRODUCE As far as I can tell, just run the task below under Ansible 2.1.2. ``` - name: Dotfiles | Install ViM customizations become: yes become_user: ""{{ username }}"" copy: src: .vim dest: ~/ mode: 0664 directory_mode: 0775 force: yes ``` ##### EXPECTED RESULTS The directory should be copied and permissions set as specified. ##### ACTUAL RESULTS I get an error related to symbolic permissions. ``` TASK [user : Dotfiles | Install ViM customizations] **************************** fatal: [default]: FAILED! 
=> {""changed"": false, ""checksum"": ""109d2e70b4a83619eec12768f976177e55168de1"", ""details"": ""bad symbolic permission for mode: 509"", ""failed"": true, ""gid"": 1000, ""group"": ""vagrant"", ""mode"": ""0775"", ""msg"": ""mode must be in octal or symbolic form"", ""owner"": ""vagrant"", ""path"": ""/home/vagrant/.vim"", ""size"": 4096, ""state"": ""directory"", ""uid"": 1000} ``` ",1,permissions issue when copying directory issue type bug report component name copy ansible version ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables absolutely nothing has changed in my config but i did upgrade ansible from right before the failure began os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific i m unning ansible from macos sierra a recent upgrade to bring up an ubuntu server in a vagrant virtualbox environment summary i m trying to copy a directory and its files from files to the file system and set appropriate permissions ansible seems to think i m using symbolic permissions when copying a directory i was running the playbook just fine but a user was reporting this error and that user was running so i upgraded after the upgrade i was got the issue as well steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used as far as i can tell just run the task below under ansible name dotfiles install vim customizations become yes become user username copy src vim dest mode directory mode force yes expected results the directory should be copied and permissions set as specified actual results i get an error related to symbolic permissions task fatal failed changed false checksum details bad symbolic permission for mode failed true gid group vagrant mode msg mode must be in octal or symbolic form owner vagrant path home vagrant vim size state directory uid ,1 261538,8237130011.0,IssuesEvent,2018-09-10 00:41:39,PerfectWeek/web-api,https://api.github.com/repos/PerfectWeek/web-api,closed,Change loggedOnly responses code,Priority: Medium Status: In Progress Type: Maintenance,Change response code when the given token is invalid from `400` to `401`,1.0,Change loggedOnly responses code - Change response code when the given token is invalid from `400` to `401`,0,change loggedonly responses code change response code when the given token is invalid from to ,0 1029,4822882376.0,IssuesEvent,2016-11-06 03:00:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum module doesn't validate that RPM could be downloaded before reading,affects_2.3 bug_report in progress waiting_on_maintainer,"##### ISSUE TYPE Bug Report ##### COMPONENT NAME `yum` ##### ANSIBLE VERSION ``` ansible 2.3.0 (type-filter a6feeee50f) last updated 2016/10/28 15:41:12 (GMT -500) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The `yum` module, in `fetch_rpm_from_url` does not attempt to validate that the request made and returned by `fetch_url` was successful, and instead immediately tried to `read` from that response causing: ``` 'NoneType' object has no attribute 'read' ``` ##### STEPS TO REPRODUCE Try installing an RPM via `yum` that causes `fetch_url` to fail downloading. 
##### EXPECTED RESULTS The error message returned from `fetch_url` as part of `info` should be displayed instead of trying to read from None. ##### ACTUAL RESULTS ``` fatal: [haproxy.dev]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""https://centos7.iuscommunity.org/ius-release.rpm""], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": ""Failure downloading https://centos7.iuscommunity.org/ius-release.rpm, 'NoneType' object has no attribute 'read'""} ```",True,"yum module doesn't validate that RPM could be downloaded before reading - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME `yum` ##### ANSIBLE VERSION ``` ansible 2.3.0 (type-filter a6feeee50f) last updated 2016/10/28 15:41:12 (GMT -500) ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY The `yum` module, in `fetch_rpm_from_url` does not attempt to validate that the request made and returned by `fetch_url` was successful, and instead immediately tried to `read` from that response causing: ``` 'NoneType' object has no attribute 'read' ``` ##### STEPS TO REPRODUCE Try installing an RPM via `yum` that causes `fetch_url` to fail downloading. ##### EXPECTED RESULTS The error message returned from `fetch_url` as part of `info` should be displayed instead of trying to read from None. ##### ACTUAL RESULTS ``` fatal: [haproxy.dev]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""conf_file"": null, ""disable_gpg_check"": false, ""disablerepo"": null, ""enablerepo"": null, ""exclude"": null, ""install_repoquery"": true, ""list"": null, ""name"": [""https://centos7.iuscommunity.org/ius-release.rpm""], ""state"": ""present"", ""update_cache"": false, ""validate_certs"": true}, ""module_name"": ""yum""}, ""msg"": ""Failure downloading https://centos7.iuscommunity.org/ius-release.rpm, 'NoneType' object has no attribute 'read'""} ```",1,yum module doesn t validate that rpm could be downloaded before reading issue type bug report component name yum ansible version ansible type filter last updated gmt configuration n a os environment n a summary the yum module in fetch rpm from url does not attempt to validate that the request made and returned by fetch url was successful and instead immediately tried to read from that response causing nonetype object has no attribute read steps to reproduce try installing an rpm via yum that causes fetch url to fail downloading expected results the error message returned from fetch url as part of info should be displayed instead of trying to read from none actual results fatal failed changed false failed true invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install repoquery true list null name state present update cache false validate certs true module name yum msg failure downloading nonetype object has no attribute read ,1 359845,10681675650.0,IssuesEvent,2019-10-22 01:53:31,unoplatform/uno,https://api.github.com/repos/unoplatform/uno,closed,"Folder in solution with name ""Uno"" causes namespace resolution errors",kind/bug kind/consumer-experience priority/backlog triage/needs-information,"I'm trying to add Uno to an existing solution. I ""logically"" created a Uno folder in the solution. 
I added MyApp.Uno.UPW, MyApp.Uno.iOS and MyApp.Uno.Shared projects with the proper nuget packages and references. When I build the iOS project I had an error like: > The type or namespace name 'UI' does not exist in the namespace 'MyApp.Uno' (are you missing an assembly reference?) I had to rename my folder to UnoApp and projects to MyApp.UnoApp.UWP etc to get it to compile.",1.0,"Folder in solution with name ""Uno"" causes namespace resolution errors - I'm trying to add Uno to an existing solution. I ""logically"" created a Uno folder in the solution. I added MyApp.Uno.UPW, MyApp.Uno.iOS and MyApp.Uno.Shared projects with the proper nuget packages and references. When I build the iOS project I had an error like: > The type or namespace name 'UI' does not exist in the namespace 'MyApp.Uno' (are you missing an assembly reference?) I had to rename my folder to UnoApp and projects to MyApp.UnoApp.UWP etc to get it to compile.",0,folder in solution with name uno causes namespace resolution errors i m trying to add uno to an existing solution i logically created a uno folder in the solution i added myapp uno upw myapp uno ios and myapp uno shared projects with the proper nuget packages and references when i build the ios project i had an error like the type or namespace name ui does not exist in the namespace myapp uno are you missing an assembly reference i had to rename my folder to unoapp and projects to myapp unoapp uwp etc to get it to compile ,0 1789,6575881306.0,IssuesEvent,2017-09-11 17:41:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 badly handles non ec2 instance related limits,affects_2.1 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2.py ##### ANSIBLE VERSION ``` Using: ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides But, I have checked devel branch in this repo and the issue seems still not handled ``` ##### CONFIGURATION ##### OS / ENVIRONMENT GNU/Linux (Ubuntu 14.04.x and 16.04.x) 64-bit. ##### SUMMARY `ec2.py` module does not report well failures coming from **non instance** related **limits** ##### STEPS TO REPRODUCE Just try creating `N` instances with `M` EBS volumes each, making sure to exceed volume related limit. ``` - name: create nodes local_action: module: ec2 params, args count: ``` ##### EXPECTED RESULTS At the least I need the invocation to fail with meaningful error: 1. fail the module invocation, and print error message explaining which limit was actually exceeded. In a perfect world, I would also expect playbook to gracefully fail with ""rollback"": 1. destroy already created nodes 2. print failure message as explained before 3. fail the task ##### ACTUAL RESULTS ``` 20:56:39 TASK [my-aws-bootstrap : create cluster nodes] ******************************* 20:56:39 task path: /var/lib/jenkins/jobs/lab-start/workspace/mypipeline/playbooks/roles/aws-bootstrap/tasks/main.yml:32 20:56:39 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 20:56:39 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python && sleep 0' 21:16:42 fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 16, ""count_tag"": null, ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": null, ""group"": [""group1""], ""group_id"": null, ""id"": null, ""image"": ""ami-xxxxxxxx"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Environment"": ""pipeline-lab"", ""Name"": ""node"", ""lab_id"": ""lab-100"", ""user"": ""user1""}, ""instance_type"": ""i2.8xlarge"", ""kernel"": null, ""key_name"": ""userkey"", ""monitoring"": true, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""us-east-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": [{""delete_on_termination"": true, ""device_name"": ""/dev/xvda"", ""volume_size"": 1000}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdl"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdm"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdn"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""device_name"": ""/dev/xvdd"", ""ephemeral"": ""ephemeral0""}, {""device_name"": ""/dev/xvde"", ""ephemeral"": ""ephemeral1""}, {""device_name"": ""/dev/xvdf"", ""ephemeral"": ""ephemeral2""}, {""device_name"": ""/dev/xvdg"", ""ephemeral"": ""ephemeral3""}, {""device_name"": ""/dev/xvdh"", ""ephemeral"": ""ephemeral4""}, {""device_name"": ""/dev/xvdi"", ""ephemeral"": ""ephemeral5""}, {""device_name"": ""/dev/xvdj"", ""ephemeral"": ""ephemeral6""}, {""device_name"": ""/dev/xvdk"", ""ephemeral"": ""ephemeral7""}], ""vpc_subnet_id"": ""subnet-xxxxxx"", ""wait"": true, ""wait_timeout"": ""1200"", ""zone"": ""us-east-1a""}, ""module_name"": ""ec2""}, ""msg"": ""wait for instances running timeout on Mon Sep 26 21:16:42 2016""} ``` ``` ``` ",True,"ec2 badly handles non ec2 instance related limits - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/ec2.py ##### ANSIBLE VERSION ``` Using: ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides But, I have checked devel branch in this repo and the issue seems still not handled ``` ##### CONFIGURATION ##### OS / ENVIRONMENT GNU/Linux (Ubuntu 14.04.x and 16.04.x) 64-bit. ##### SUMMARY `ec2.py` module does not report well failures coming from **non instance** related **limits** ##### STEPS TO REPRODUCE Just try creating `N` instances with `M` EBS volumes each, making sure to exceed volume related limit. ``` - name: create nodes local_action: module: ec2 params, args count: ``` ##### EXPECTED RESULTS At the least I need the invocation to fail with meaningful error: 1. fail the module invocation, and print error message explaining which limit was actually exceeded. In a perfect world, I would also expect playbook to gracefully fail with ""rollback"": 1. destroy already created nodes 2. print failure message as explained before 3. 
fail the task ##### ACTUAL RESULTS ``` 20:56:39 TASK [my-aws-bootstrap : create cluster nodes] ******************************* 20:56:39 task path: /var/lib/jenkins/jobs/lab-start/workspace/mypipeline/playbooks/roles/aws-bootstrap/tasks/main.yml:32 20:56:39 ESTABLISH LOCAL CONNECTION FOR USER: jenkins 20:56:39 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python && sleep 0' 21:16:42 fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""assign_public_ip"": false, ""aws_access_key"": null, ""aws_secret_key"": null, ""count"": 16, ""count_tag"": null, ""ebs_optimized"": false, ""ec2_url"": null, ""exact_count"": null, ""group"": [""group1""], ""group_id"": null, ""id"": null, ""image"": ""ami-xxxxxxxx"", ""instance_ids"": null, ""instance_profile_name"": null, ""instance_tags"": {""Environment"": ""pipeline-lab"", ""Name"": ""node"", ""lab_id"": ""lab-100"", ""user"": ""user1""}, ""instance_type"": ""i2.8xlarge"", ""kernel"": null, ""key_name"": ""userkey"", ""monitoring"": true, ""network_interfaces"": null, ""placement_group"": null, ""private_ip"": null, ""profile"": null, ""ramdisk"": null, ""region"": ""us-east-1"", ""security_token"": null, ""source_dest_check"": true, ""spot_launch_group"": null, ""spot_price"": null, ""spot_type"": ""one-time"", ""spot_wait_timeout"": ""600"", ""state"": ""present"", ""tenancy"": ""default"", ""termination_protection"": false, ""user_data"": null, ""validate_certs"": true, ""volumes"": [{""delete_on_termination"": true, ""device_name"": ""/dev/xvda"", ""volume_size"": 1000}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdl"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdm"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""delete_on_termination"": true, ""device_name"": ""/dev/xvdn"", ""device_type"": ""gp2"", ""volume_size"": ""100""}, {""device_name"": ""/dev/xvdd"", ""ephemeral"": ""ephemeral0""}, {""device_name"": ""/dev/xvde"", ""ephemeral"": ""ephemeral1""}, {""device_name"": ""/dev/xvdf"", ""ephemeral"": ""ephemeral2""}, {""device_name"": ""/dev/xvdg"", ""ephemeral"": ""ephemeral3""}, {""device_name"": ""/dev/xvdh"", ""ephemeral"": ""ephemeral4""}, {""device_name"": ""/dev/xvdi"", ""ephemeral"": ""ephemeral5""}, {""device_name"": ""/dev/xvdj"", ""ephemeral"": ""ephemeral6""}, {""device_name"": ""/dev/xvdk"", ""ephemeral"": ""ephemeral7""}], ""vpc_subnet_id"": ""subnet-xxxxxx"", ""wait"": true, ""wait_timeout"": ""1200"", ""zone"": ""us-east-1a""}, ""module_name"": ""ec2""}, ""msg"": ""wait for instances running timeout on Mon Sep 26 21:16:42 2016""} ``` ``` ``` ",1, badly handles non instance related limits issue type bug report component name cloud amazon py ansible version using ansible config file etc ansible ansible cfg configured module search path default w o overrides but i have checked devel branch in this repo and the issue seems still not handled configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific gnu linux ubuntu x and x bit summary py module does not report well failures coming from non instance related limits steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used just try creating n 
instances with m ebs volumes each making sure to exceed volume related limit name create nodes local action module params args count expected results at the least i need the invocation to fail with meaningful error fail the module invocation and print error message explaining which limit was actually exceeded in a perfect world i would also expect playbook to gracefully fail with rollback destroy already created nodes print failure message as explained before fail the task actual results task task path var lib jenkins jobs lab start workspace mypipeline playbooks roles aws bootstrap tasks main yml establish local connection for user jenkins exec bin sh c lang en us utf lc all en us utf lc messages en us utf python sleep fatal failed changed false failed true invocation module args assign public ip false aws access key null aws secret key null count count tag null ebs optimized false url null exact count null group group id null id null image ami xxxxxxxx instance ids null instance profile name null instance tags environment pipeline lab name node lab id lab user instance type kernel null key name userkey monitoring true network interfaces null placement group null private ip null profile null ramdisk null region us east security token null source dest check true spot launch group null spot price null spot type one time spot wait timeout state present tenancy default termination protection false user data null validate certs true volumes vpc subnet id subnet xxxxxx wait true wait timeout zone us east module name msg wait for instances running timeout on mon sep ,1 218156,16960086212.0,IssuesEvent,2021-06-29 01:41:09,anhdtqwerty/thpt,https://api.github.com/repos/anhdtqwerty/thpt,closed,Major | Quản lý Bộ môn | Thêm Bộ môn | Thêm và hiển thị thành công bộ môn mới bị trùng ,dev-done test-verified,"Thêm Bộ môn mới Step: 1. Click ""Thêm Bộ môn"" 2. Nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu/ cuối 3. Bấm ""Lưu"" Actual: Thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu/ cuối Expect: Thêm mới không thành công Thông báo ""Bộ môn đã tồn tại"" ",1.0,"Major | Quản lý Bộ môn | Thêm Bộ môn | Thêm và hiển thị thành công bộ môn mới bị trùng - Thêm Bộ môn mới Step: 1. Click ""Thêm Bộ môn"" 2. Nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu/ cuối 3. 
Bấm ""Lưu"" Actual: Thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu/ cuối Expect: Thêm mới không thành công Thông báo ""Bộ môn đã tồn tại"" ",0,major quản lý bộ môn thêm bộ môn thêm và hiển thị thành công bộ môn mới bị trùng thêm bộ môn mới step click thêm bộ môn nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu cuối bấm lưu actual thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu cuối expect thêm mới không thành công thông báo bộ môn đã tồn tại img width alt src img width alt src img width alt src ,0 202992,15326516271.0,IssuesEvent,2021-02-26 03:48:11,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,opened,[Failing Test] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance],kind/failing-test," #### Which jobs are failing: `ci-kubernetes-gce-conformance-latest-kubetest2` #### Which test(s) are failing: `Kubernetes e2e suite.[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] ` #### Since when has it been failing: Fairly new job, but it's the only Conformance test which is failing consistently. #### Testgrid link: https://testgrid.k8s.io/conformance-all#Conformance%20-%20GCE%20-%20master%20-%20kubetest2 #### Reason for failure: e.g. https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-gce-conformance-latest-kubetest2/1365061586389045248 ``` Feb 25 23:28:50.753: INFO: Pod logs: 2021/02/25 23:28:19 OK: Got token 2021/02/25 23:28:19 OK: got issuer https://kubernetes.io/kubetest2 2021/02/25 23:28:19 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.io/kubetest2"", Subject:""system:serviceaccount:svcaccounts-5085:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614296299, NotBefore:1614295699, IssuedAt:1614295699, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5085"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""de16732d-7e39-46b8-8d7d-44bde84f9a91""}}} 2021/02/25 23:28:19 Get ""https://kubernetes.io/kubetest2/.well-known/openid-configuration"": x509: certificate signed by unknown authority ``` #### Anything else we need to know: This is a new job as part of migration to kubetest2 effort: https://github.com/kubernetes/enhancements/issues/2464 Comparing it to: https://testgrid.k8s.io/conformance-all#conformance,%20master%20(dev) e.g https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-conformance/1365089519384137728 ``` Feb 26 00:43:27.144: INFO: Pod logs: 2021/02/26 00:42:55 OK: Got token 2021/02/26 00:42:55 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/02/26 00:42:55 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.default.svc.cluster.local"", Subject:""system:serviceaccount:svcaccounts-5425:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614300775, NotBefore:1614300175, IssuedAt:1614300175, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5425"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""2591919d-84ec-42a0-9976-80803df8dedd""}}} 2021/02/26 00:42:55 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/02/26 00:42:55 OK: Validated signature on JWT 2021/02/26 00:42:55 OK: Got valid claims from token! 
2021/02/26 00:42:55 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.default.svc.cluster.local"", Subject:""system:serviceaccount:svcaccounts-5425:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614300775, NotBefore:1614300175, IssuedAt:1614300175, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5425"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""2591919d-84ec-42a0-9976-80803df8dedd""}}} ``` The issuer seems to differ `https://kubernetes.io/kubetest2` vs `https://kubernetes.default.svc.cluster.local` would be good to know why/where it's coming from, especially to know if there's a discrepancy/inherent assumptions in how the cluster needs to be created. This doesn't seem to be related to another issue regarding the same test: https://github.com/kubernetes/kubernetes/issues/99470 /cc @BenTheElder @spiffxp @kubernetes/sig-auth-bugs ",1.0,"[Failing Test] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] - #### Which jobs are failing: `ci-kubernetes-gce-conformance-latest-kubetest2` #### Which test(s) are failing: `Kubernetes e2e suite.[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] ` #### Since when has it been failing: Fairly new job, but it's the only Conformance test which is failing consistently. #### Testgrid link: https://testgrid.k8s.io/conformance-all#Conformance%20-%20GCE%20-%20master%20-%20kubetest2 #### Reason for failure: e.g. https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-gce-conformance-latest-kubetest2/1365061586389045248 ``` Feb 25 23:28:50.753: INFO: Pod logs: 2021/02/25 23:28:19 OK: Got token 2021/02/25 23:28:19 OK: got issuer https://kubernetes.io/kubetest2 2021/02/25 23:28:19 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.io/kubetest2"", Subject:""system:serviceaccount:svcaccounts-5085:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614296299, NotBefore:1614295699, IssuedAt:1614295699, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5085"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""de16732d-7e39-46b8-8d7d-44bde84f9a91""}}} 2021/02/25 23:28:19 Get ""https://kubernetes.io/kubetest2/.well-known/openid-configuration"": x509: certificate signed by unknown authority ``` #### Anything else we need to know: This is a new job as part of migration to kubetest2 effort: https://github.com/kubernetes/enhancements/issues/2464 Comparing it to: https://testgrid.k8s.io/conformance-all#conformance,%20master%20(dev) e.g https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-conformance/1365089519384137728 ``` Feb 26 00:43:27.144: INFO: Pod logs: 2021/02/26 00:42:55 OK: Got token 2021/02/26 00:42:55 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/02/26 00:42:55 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.default.svc.cluster.local"", Subject:""system:serviceaccount:svcaccounts-5425:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614300775, NotBefore:1614300175, IssuedAt:1614300175, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5425"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""2591919d-84ec-42a0-9976-80803df8dedd""}}} 2021/02/26 00:42:55 OK: Constructed OIDC provider for 
issuer https://kubernetes.default.svc.cluster.local 2021/02/26 00:42:55 OK: Validated signature on JWT 2021/02/26 00:42:55 OK: Got valid claims from token! 2021/02/26 00:42:55 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:""https://kubernetes.default.svc.cluster.local"", Subject:""system:serviceaccount:svcaccounts-5425:default"", Audience:jwt.Audience{""oidc-discovery-test""}, Expiry:1614300775, NotBefore:1614300175, IssuedAt:1614300175, ID:""""}, Kubernetes:openidmetadata.kubeClaims{Namespace:""svcaccounts-5425"", ServiceAccount:openidmetadata.kubeName{Name:""default"", UID:""2591919d-84ec-42a0-9976-80803df8dedd""}}} ``` The issuer seems to differ `https://kubernetes.io/kubetest2` vs `https://kubernetes.default.svc.cluster.local` would be good to know why/where it's coming from, especially to know if there's a discrepancy/inherent assumptions in how the cluster needs to be created. This doesn't seem to be related to another issue regarding the same test: https://github.com/kubernetes/kubernetes/issues/99470 /cc @BenTheElder @spiffxp @kubernetes/sig-auth-bugs ",0, serviceaccounts serviceaccountissuerdiscovery should support oidc discovery of service account issuer which jobs are failing ci kubernetes gce conformance latest which test s are failing kubernetes suite serviceaccounts serviceaccountissuerdiscovery should support oidc discovery of service account issuer since when has it been failing fairly new job but it s the only conformance test which is failing consistently testgrid link reason for failure e g feb info pod logs ok got token ok got issuer full not validated claims openidmetadata claims claims jwt claims issuer subject system serviceaccount svcaccounts default audience jwt audience oidc discovery test expiry notbefore issuedat id kubernetes openidmetadata kubeclaims namespace svcaccounts serviceaccount openidmetadata kubename name default uid get certificate signed by unknown authority anything else we need to know this is a new job as part of migration to effort comparing it to e g feb info pod logs ok got token ok got issuer full not validated claims openidmetadata claims claims jwt claims issuer subject system serviceaccount svcaccounts default audience jwt audience oidc discovery test expiry notbefore issuedat id kubernetes openidmetadata kubeclaims namespace svcaccounts serviceaccount openidmetadata kubename name default uid ok constructed oidc provider for issuer ok validated signature on jwt ok got valid claims from token full validated claims openidmetadata claims claims jwt claims issuer subject system serviceaccount svcaccounts default audience jwt audience oidc discovery test expiry notbefore issuedat id kubernetes openidmetadata kubeclaims namespace svcaccounts serviceaccount openidmetadata kubename name default uid the issuer seems to differ vs would be good to know why where it s coming from especially to know if there s a discrepancy inherent assumptions in how the cluster needs to be created this doesn t seem to be related to another issue regarding the same test cc bentheelder spiffxp kubernetes sig auth bugs ,0 865,4534587162.0,IssuesEvent,2016-09-08 15:00:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apache2_module fails for php7.0 on Ubuntu Xenial,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100) lib/ansible/modules/core: 
(detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100) lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100) config file = /home/rowan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ubuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M this breaks the regexp checking if the module is enabled. I've made a work around here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions. Not entirely sure what the best solution to this is. ##### STEPS TO REPRODUCE Run apache2_module with name=php7.0 state=present on a xenial server. ",True,"apache2_module fails for php7.0 on Ubuntu Xenial - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100) lib/ansible/modules/core: (detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100) lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100) config file = /home/rowan/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Ubuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M this breaks the regexp checking if the module is enabled. I've made a work around here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions. Not entirely sure what the best solution to this is. ##### STEPS TO REPRODUCE Run apache2_module with name=php7.0 state=present on a xenial server. ",1, module fails for on ubuntu xenial issue type bug report component name module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home rowan ansible cfg configured module search path default w o overrides configuration n a os environment n a summary ubuntu xenial lists the module as module when running m this breaks the regexp checking if the module is enabled i ve made a work around here but i didn t make a pr since i expect it may break other distros versions not entirely sure what the best solution to this is steps to reproduce run module with name state present on a xenial server ,1 289257,21776593189.0,IssuesEvent,2022-05-13 14:21:37,JuliaRobotics/RoME.jl,https://api.github.com/repos/JuliaRobotics/RoME.jl,closed,update Pose2Pose2 TYPEDEF docs with Manifolds math,enhancement documentation manifolds user experience,"New CJL Docs [Using Manifolds.jl page has a link to the Pose2Pose2 as easy example](https://juliarobotics.org/Caesar.jl/latest/concepts/using_manifolds/#Using-Manifolds-in-Factors) where and how the manifold math is used. So there should be a math description to look at when you click through to those docs. This one: - https://juliarobotics.org/Caesar.jl/latest/concepts/available_varfacs/#RoME.Pose2Pose2",1.0,"update Pose2Pose2 TYPEDEF docs with Manifolds math - New CJL Docs [Using Manifolds.jl page has a link to the Pose2Pose2 as easy example](https://juliarobotics.org/Caesar.jl/latest/concepts/using_manifolds/#Using-Manifolds-in-Factors) where and how the manifold math is used. 
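For orientation, one common way such a relative-pose factor is written on SE(2) — an illustrative assumption about what Pose2Pose2 computes, not a definition taken from the RoME.jl or Manifolds.jl documentation — is:

```latex
% Hypothetical sketch of a relative-pose factor residual on SE(2).
% X_i, X_j are the two Pose2 variables, \hat{Z}_{ij} is the measured relative
% transform, \circ is group composition, and \log is the group logarithm
% mapping SE(2) into its Lie algebra (a 3-vector: dx, dy, d\theta).
r_{ij} \;=\; \log\!\left( \hat{Z}_{ij}^{-1} \circ \left( X_i^{-1} \circ X_j \right) \right) \;\in\; \mathfrak{se}(2)
```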
So there should be a math description to look at when you click through to those docs. This one: - https://juliarobotics.org/Caesar.jl/latest/concepts/available_varfacs/#RoME.Pose2Pose2",0,update typedef docs with manifolds math new cjl docs where and how the manifold math is used so there should be a math description to look at when you click through to those docs this one ,0 1707,6574435760.0,IssuesEvent,2017-09-11 12:53:32,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible 2.2: docker_service 'Error starting project - ',affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION In version 2.2 I use: ``` - name: starting bind9 in docker docker_service: pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` In version 2.1.2 I used (due to lack of pull statement): ``` - name: pull image bind9 docker_image: name: docker.solvinity.net/bind9 pull: yes force: yes tags: - docker - name: starting bind9 in docker docker_service: # pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` ##### OS / ENVIRONMENT Ubuntu 16.04 / Ansible 2.2 from ppa ##### SUMMARY It used to work in 2.1.2 but now fails. ##### STEPS TO REPRODUCE Simply update to Ansible 2.2 and run playbook with snippet above. ##### EXPECTED RESULTS Upgrade / Start docker container ##### ACTUAL RESULTS ``` fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": { ""services"": { ""bind9"": { ""image"": ""docker.solvinity.net/bind9"", ""logging"": { ""driver"": ""syslog"", ""options"": { ""syslog-facility"": ""local6"", ""tag"": ""bind9"" } }, ""network_mode"": ""host"", ""restart"": ""always"", ""volumes"": [ ""/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data"", ""/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"" ] } }, ""version"": ""2"" }, ""dependencies"": true, ""docker_host"": null, ""files"": null, ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bind9"", ""project_src"": null, ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 120, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - "" } ```",True,"ansible 2.2: docker_service 'Error starting project - ' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION In version 2.2 I use: ``` - name: starting bind9 in docker docker_service: pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` In version 2.1.2 I used (due to lack of pull statement): ``` - name: pull image bind9 docker_image: name: docker.solvinity.net/bind9 pull: yes force: yes tags: - docker - name: starting bind9 in docker docker_service: # pull: yes project_name: bind9 timeout: 120 definition: version: '2' services: bind9: restart: always logging: driver: syslog options: syslog-facility: local6 tag: bind9 image: docker.solvinity.net/bind9 network_mode: host volumes: - /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data - /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc register: output tags: - docker ``` ##### OS / ENVIRONMENT Ubuntu 16.04 / Ansible 2.2 from ppa ##### SUMMARY It used to work in 2.1.2 but now fails. ##### STEPS TO REPRODUCE Simply update to Ansible 2.2 and run playbook with snippet above. ##### EXPECTED RESULTS Upgrade / Start docker container ##### ACTUAL RESULTS ``` fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_args"": { ""api_version"": null, ""build"": false, ""cacert_path"": null, ""cert_path"": null, ""debug"": false, ""definition"": { ""services"": { ""bind9"": { ""image"": ""docker.solvinity.net/bind9"", ""logging"": { ""driver"": ""syslog"", ""options"": { ""syslog-facility"": ""local6"", ""tag"": ""bind9"" } }, ""network_mode"": ""host"", ""restart"": ""always"", ""volumes"": [ ""/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data"", ""/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"" ] } }, ""version"": ""2"" }, ""dependencies"": true, ""docker_host"": null, ""files"": null, ""filter_logger"": false, ""hostname_check"": false, ""key_path"": null, ""nocache"": false, ""project_name"": ""bind9"", ""project_src"": null, ""pull"": true, ""recreate"": ""smart"", ""remove_images"": null, ""remove_orphans"": false, ""remove_volumes"": false, ""restarted"": false, ""scale"": null, ""services"": null, ""ssl_version"": null, ""state"": ""present"", ""stopped"": false, ""timeout"": 120, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null }, ""module_name"": ""docker_service"" }, ""msg"": ""Error starting project - "" } ```",1,ansible docker service error starting project issue type bug report component name docker service ansible version ansible config file home jhoeve a gitcollections authdns ansible code ansible cfg configured module search path default w o overrides configuration in version i use name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker in version i used due to lack of pull statement name pull image docker image name docker solvinity net pull yes force yes tags docker name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker os environment ubuntu ansible from ppa summary it used to work in but now fails steps to reproduce simply update to ansible and run playbook with snippet above expected results upgrade start docker container actual results fatal failed changed false failed true invocation module args api version null build false cacert path null cert path null debug false definition services image docker solvinity net logging driver syslog options syslog facility tag network mode host restart always volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc version dependencies true docker host null files null filter logger false hostname check false key path null nocache false project name project src null pull true recreate smart remove images null remove orphans false remove volumes false restarted false scale null services null ssl version null state present stopped false timeout tls null tls hostname null tls verify null module name docker service msg error starting project ,1 42455,5445559818.0,IssuesEvent,2017-03-07 
08:02:00,bounswe/bounswe2017group6,https://api.github.com/repos/bounswe/bounswe2017group6,closed,Creating new mock-up for Admin Control Panel (web),design,"The mock-up for Admin Control Panel should be redone with respect to newly agreed-upon style choices, and with new requirements.",1.0,"Creating new mock-up for Admin Control Panel (web) - The mock-up for Admin Control Panel should be redone with respect to newly agreed-upon style choices, and with new requirements.",0,creating new mock up for admin control panel web the mock up for admin control panel should be redone with respect to newly agreed upon style choices and with new requirements ,0 1874,6577499684.0,IssuesEvent,2017-09-12 01:20:34,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker module has wrong/misleading error handling during container creation,affects_2.0 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` and current devel ##### CONFIGURATION standard ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again. But if the image is local, module will fail with pull error instead of actual start error. ##### STEPS TO REPRODUCE 1. Create local image with `docker build` or `docker commit` and tag it as `my/test:latest` 2. Try to start container giving wrong network name: ``` docker: name: 'test' image: 'my/test:latest' state: 'restarted' net: 'bad_network_name' ``` ##### EXPECTED RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: network bad_network_name not found""} ``` ##### ACTUAL RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""changes"": [""{\""status\"":\""Pulling repository docker.io/my/test\""}\r\n"", ""{\""errorDetail\"":{\""message\"":\""Error: image my/test not found\""},\""error\"":\""Error: image my/test not found\""}\r\n""], ""failed"": true, ""msg"": ""Unrecognized status from pull."", ""status"": """"} ``` [Part of code in question](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1658) – try/except block with do_create. Why do we try to pull the image if we get 404 response code? As per [docker api docs](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22) response codes for `containers/create` endpoint are too general to make such decisions: - 201 – no error - 404 – no such container - 406 – impossible to attach (container not running) - 500 – server error ",True,"docker module has wrong/misleading error handling during container creation - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker module ##### ANSIBLE VERSION ``` ansible 2.0.0.2 ``` and current devel ##### CONFIGURATION standard ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again. But if the image is local, module will fail with pull error instead of actual start error. ##### STEPS TO REPRODUCE 1. Create local image with `docker build` or `docker commit` and tag it as `my/test:latest` 2. Try to start container giving wrong network name: ``` docker: name: 'test' image: 'my/test:latest' state: 'restarted' net: 'bad_network_name' ``` ##### EXPECTED RESULTS ``` fatal: [test-host]: FAILED! 
=> {""changed"": false, ""failed"": true, ""msg"": ""Docker API Error: network bad_network_name not found""} ``` ##### ACTUAL RESULTS ``` fatal: [test-host]: FAILED! => {""changed"": false, ""changes"": [""{\""status\"":\""Pulling repository docker.io/my/test\""}\r\n"", ""{\""errorDetail\"":{\""message\"":\""Error: image my/test not found\""},\""error\"":\""Error: image my/test not found\""}\r\n""], ""failed"": true, ""msg"": ""Unrecognized status from pull."", ""status"": """"} ``` [Part of code in question](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L1658) – try/except block with do_create. Why do we try to pull the image if we get 404 response code? As per [docker api docs](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22) response codes for `containers/create` endpoint are too general to make such decisions: - 201 – no error - 404 – no such container - 406 – impossible to attach (container not running) - 500 – server error ",1,docker module has wrong misleading error handling during container creation issue type bug report component name docker module ansible version ansible and current devel configuration standard os environment ubuntu summary when docker module fails to start a container for any reason it tries to fix it by pulling the image from hub and start again but if the image is local module will fail with pull error instead of actual start error steps to reproduce create local image with docker build or docker commit and tag it as my test latest try to start container giving wrong network name docker name test image my test latest state restarted net bad network name expected results fatal failed changed false failed true msg docker api error network bad network name not found actual results fatal failed changed false changes failed true msg unrecognized status from pull status – try except block with do create why do we try to pull the image if we get response code as per response codes for containers create endpoint are too general to make such decisions – no error – no such container – impossible to attach container not running – server error ,1 1496,6478927151.0,IssuesEvent,2017-08-18 09:15:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_virualmashine issue,affects_2.1 azure bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. 
##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" && echo ansible-tmp-1470326423.51-208881287834045=""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python 
/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",True,"azure_rm_virualmashine issue - ##### ISSUE TYPE - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME azure_rm_virtualmachine module ##### ANSIBLE VERSION ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT fedora 23 ##### SUMMARY ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` ##### EXPECTED RESULTS creatiion of VM. 
##### ACTUAL RESULTS PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" && echo ansible-tmp-1470326423.51-208881287834045=""`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" && echo ansible-tmp-1470326423.51-208881287834045=""` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `"" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python 
/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf ""/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/"" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1284, in main() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 1281, in main AzureRMVirtualMachine() File ""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py"", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""azure_rm_virtualmachine""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1284, in \n main()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 1281, in main\n AzureRMVirtualMachine()\n File \""/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\"", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ``` ",1,azure rm virualmashine issue issue type bug report feature idea documentation report component name azure rm virtualmachine module ansible version ansible noarch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables python modules azure os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific fedora summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local gather facts false become false vars files environments azure azure credentials encrypted yml inventory environments azure azure credentials encrypted temp passwd yml vars roles create azure vm and roles create azure vm main yml name create vm with defaults azure rm virtualmachine resource group testing name admin username test user admin password test vm image offer centos publisher openlogic sku version latest expected results creatiion of vm actual results playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main 
file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed ,1 1086,4934170651.0,IssuesEvent,2016-11-28 18:18:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"copy: dest=filename fails with ""Destination directory does not exist""",affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ##### SUMMARY When I try to copy a file into the current working directory on the remote end, Ansible fails because it tries to check for the existence of the (empty string) directory. ##### STEPS TO REPRODUCE ``` $ ansible localhost -m copy -a ""src=/dev/null dest=file.txt"" ``` ##### EXPECTED RESULTS I expect an empty file `file.txt` to show up in my home directory ##### ACTUAL RESULTS ``` localhost | FAILED! 
=> { ""changed"": false, ""checksum"": ""da39a3ee5e6b4b0d3255bfef95601890afd80709"", ""failed"": true, ""msg"": ""Destination directory does not exist"" } ``` ",True,"copy: dest=filename fails with ""Destination directory does not exist"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME copy ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Linux ##### SUMMARY When I try to copy a file into the current working directory on the remote end, Ansible fails because it tries to check for the existence of the (empty string) directory. ##### STEPS TO REPRODUCE ``` $ ansible localhost -m copy -a ""src=/dev/null dest=file.txt"" ``` ##### EXPECTED RESULTS I expect an empty file `file.txt` to show up in my home directory ##### ACTUAL RESULTS ``` localhost | FAILED! => { ""changed"": false, ""checksum"": ""da39a3ee5e6b4b0d3255bfef95601890afd80709"", ""failed"": true, ""msg"": ""Destination directory does not exist"" } ``` ",1,copy dest filename fails with destination directory does not exist issue type bug report component name copy ansible version ansible config file configured module search path default w o overrides configuration os environment linux summary when i try to copy a file into the current working directory on the remote end ansible fails because it tries to check for the existence of the empty string directory steps to reproduce ansible localhost m copy a src dev null dest file txt expected results i expect an empty file file txt to show up in my home directory actual results localhost failed changed false checksum failed true msg destination directory does not exist ,1 1882,6577511061.0,IssuesEvent,2017-09-12 01:25:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Make add_host less verbose,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME add_host ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] remote_tmp = $HOME/.ansible/tmp roles_path = /etc/ansible/roles inventory = inventory host_key_checking = False ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S jinja2_extensions = jinja2.ext.do [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = False [paramiko_connection] [ssh_connection] pipelining = True scp_if_ssh = True ssh_args = -F ssh_config [accelerate] [selinux] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When running `add_host` I get a ton of output on my shell. I don't see any reasons for this verbose output. ##### STEPS TO REPRODUCE ``` add_host: name: foobar ``` ##### EXPECTED RESULTS Not a ton of output ##### ACTUAL RESULTS This is the output... 
Without -vvv for a single server on OpenStack > ok: [localhost] => (item={'_ansible_no_log': False, u'changed': False, u'server': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': 
u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, 'item': u'singlebox', 'invocation': {'module_name': u'os_server', u'module_args': {u'auth_type': None, u'availability_zone': None, u'image': u'Ubuntu 14.04 foo-cloudimg amd64', u'image_exclude': u'(deprecated)', u'flavor_include': None, u'meta': None, u'flavor': u'm1.micro', u'security_groups': [u'default', u'default'], u'boot_from_volume': False, u'userdata': u'#cloud-config\nsystem_info:\n default_user:\n name: foostaff\n home: /home/foostaff\n shell: /bin/bash\n lock_passwd: True\n gecos: foostaff\n sudo: [""ALL=(ALL) NOPASSWD:ALL""]\nruncmd:\n - [ mkdir, -p, ""/home/foostaff/.ssh"" ]\n - ""wget \'https://gitlab.foo.de/security/foostaff-keys/raw/master/authorized_keys\' -O - > /home/foostaff/.ssh/authorized_keys -q -t 5 -T 300""\n - [ chmod, 700, ""/home/foostaff/.ssh"" ]\n - [ chmod, 600, ""/home/foostaff/.ssh/authorized_keys"" ]\n - [ chown, -R, foostaff, ""/home/foostaff/.ssh/"" ]\n', u'network': None, u'nics': [{u'net-name': u'internal'}], u'floating_ips': None, u'flavor_ram': None, u'volume_size': False, u'state': u'present', u'auto_ip': True, u'cloud': None, u'floating_ip_pools': [u'float1'], u'region_name': None, u'key_name': u'username', u'api_timeout': None, u'auth': None, u'endpoint_type': u'public', u'boot_volume': None, u'key': None, u'cacert': None, u'terminate_volume': False, u'wait': True, u'name': u'singlebox', u'timeout': 180, u'cert': None, u'volumes': [], u'verify': True, u'config_drive': False}}, u'openstack': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': 
u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35'}) ",True,"Make add_host less verbose - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME add_host ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] remote_tmp = $HOME/.ansible/tmp roles_path = /etc/ansible/roles inventory = inventory host_key_checking = False ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S jinja2_extensions = jinja2.ext.do [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = False [paramiko_connection] [ssh_connection] pipelining = True scp_if_ssh = True ssh_args = -F ssh_config [accelerate] [selinux] ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY When running `add_host` I get a ton of output on my shell. I don't see any reasons for this verbose output. ##### STEPS TO REPRODUCE ``` add_host: name: foobar ``` ##### EXPECTED RESULTS Not a ton of output ##### ACTUAL RESULTS This is the output... 
Without -vvv for a single server on OpenStack > ok: [localhost] => (item={'_ansible_no_log': False, u'changed': False, u'server': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': 
u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, 'item': u'singlebox', 'invocation': {'module_name': u'os_server', u'module_args': {u'auth_type': None, u'availability_zone': None, u'image': u'Ubuntu 14.04 foo-cloudimg amd64', u'image_exclude': u'(deprecated)', u'flavor_include': None, u'meta': None, u'flavor': u'm1.micro', u'security_groups': [u'default', u'default'], u'boot_from_volume': False, u'userdata': u'#cloud-config\nsystem_info:\n default_user:\n name: foostaff\n home: /home/foostaff\n shell: /bin/bash\n lock_passwd: True\n gecos: foostaff\n sudo: [""ALL=(ALL) NOPASSWD:ALL""]\nruncmd:\n - [ mkdir, -p, ""/home/foostaff/.ssh"" ]\n - ""wget \'https://gitlab.foo.de/security/foostaff-keys/raw/master/authorized_keys\' -O - > /home/foostaff/.ssh/authorized_keys -q -t 5 -T 300""\n - [ chmod, 700, ""/home/foostaff/.ssh"" ]\n - [ chmod, 600, ""/home/foostaff/.ssh/authorized_keys"" ]\n - [ chown, -R, foostaff, ""/home/foostaff/.ssh/"" ]\n', u'network': None, u'nics': [{u'net-name': u'internal'}], u'floating_ips': None, u'flavor_ram': None, u'volume_size': False, u'state': u'present', u'auto_ip': True, u'cloud': None, u'floating_ip_pools': [u'float1'], u'region_name': None, u'key_name': u'username', u'api_timeout': None, u'auth': None, u'endpoint_type': u'public', u'boot_volume': None, u'key': None, u'cacert': None, u'terminate_volume': False, u'wait': True, u'name': u'singlebox', u'timeout': 180, u'cert': None, u'volumes': [], u'verify': True, u'config_drive': False}}, u'openstack': {u'OS-EXT-STS:task_state': None, u'addresses': {u'internal': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.9', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d3:6e:0e', u'version': 4, u'addr': u'192.168.0.22', u'OS-EXT-IPS:type': u'floating'}]}, u'image': {u'id': u'd4711bae-b30e-4e32-a4dd-64010a01e104'}, u'OS-EXT-STS:vm_state': u'active', u'OS-SRV-USG:launched_at': u'2016-03-24T14:55:58.000000', u'NAME_ATTR': u'name', u'flavor': {u'id': u'ba1dc475-4f14-4e46-b601-ab43b775e4b5', u'name': u'm1.micro'}, u'az': u'nova', u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35', u'security_groups': [{u'description': u'Default policy which allows all outgoing and incomming only SSH from foo jumphosts', u'id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'security_group_rules': [{u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.5/25', u'id': u'0a7cd664-0896-40bd-b98e-20a6d25dc4e6'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.0.0.10/24', u'id': u'18326637-7af7-4db1-a575-3c474a8506b8'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'1b8c5e01-c739-46b1-bdeb-e4e46460ee54'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.10/32', u'id': 
u'1c33a398-12ee-4a85-b70c-176ee3cd627a'}, {u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': u'0.0.0.0/0', u'id': u'cd43952e-cbeb-4b07-86c5-a357cbf0fab4'}, {u'direction': u'ingress', u'protocol': None, u'ethertype': u'IPv4', u'port_range_max': None, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': None, u'remote_ip_prefix': None, u'id': u'd50c2cd0-9ae9-4a1b-b8d9-e8880ad4bc52'}, {u'direction': u'ingress', u'protocol': u'tcp', u'ethertype': u'IPv4', u'port_range_max': 22, u'security_group_id': u'f5f8560d-b674-41ed-84b9-8d04dae79000', u'port_range_min': 22, u'remote_ip_prefix': u'10.100.0.15/32', u'id': u'e8099b94-603f-4602-bb57-2f678e1a8a22'}], u'name': u'default'}], u'user_id': u'eaa1c24248ef4c9ab7dd87b7f2a96572', u'OS-DCF:diskConfig': u'MANUAL', u'networks': {u'internal': [u'192.168.0.9', u'192.168.0.22']}, u'accessIPv4': u'192.168.0.22', u'accessIPv6': u'', u'cloud': u'envvars', u'key_name': u'username', u'progress': 0, u'OS-EXT-STS:power_state': 1, u'interface_ip': u'192.168.0.22', u'config_drive': u'', u'status': u'ACTIVE', u'updated': u'2016-03-24T14:55:58Z', u'hostId': u'd3d17c9a8b6b19ccda574e8418ca98da23682e0f1f4398a122a96088', u'HUMAN_ID': True, u'OS-SRV-USG:terminated_at': None, u'public_v4': u'192.168.0.22', u'public_v6': u'', u'private_v4': u'192.168.0.9', u'OS-EXT-AZ:availability_zone': u'nova', u'name': u'singlebox', u'created': u'2016-03-24T14:55:53Z', u'tenant_id': u'35f7725e44794773ae17d9ad18a4dd23', u'region': u'RegionOne', u'os-extended-volumes:volumes_attached': [], u'volumes': [], u'metadata': {}, u'human_id': u'singlebox'}, u'id': u'637f46be-0b6c-494e-b75f-b4172c60db35'}) ",1,make add host less verbose issue type feature idea component name add host ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration remote tmp home ansible tmp roles path etc ansible roles inventory inventory host key checking false ansible managed ansible managed file modified on y m d h m s extensions ext do become true become method sudo become user root become ask pass false pipelining true scp if ssh true ssh args f ssh config os environment n a summary when running add host i get a ton of output on my shell i don t see any reasons for this verbose output steps to reproduce add host name foobar expected results not a ton of output actual results this is the output without vvv for a single server on openstack ok item ansible no log false u changed false u server u os ext sts task state none u addresses u internal u image u id u u os ext sts vm state u active u os srv usg launched at u u name attr u name u flavor u id u u name u micro u az u nova u id u u security groups u name u default u user id u u os dcf diskconfig u manual u networks u internal u u u u u cloud u envvars u key name u username u progress u os ext sts power state u interface ip u u config drive u u status u active u updated u u hostid u u human id true u os srv usg terminated at none u public u u public u u private u u os ext az availability zone u nova u name u singlebox u created u u tenant id u u region u regionone u os extended volumes volumes attached u volumes u metadata u human id u singlebox item u singlebox invocation module name u os server u module args u auth type none u availability zone none u image u ubuntu foo cloudimg u image exclude u deprecated u flavor 
include none u meta none u flavor u micro u security groups u boot from volume false u userdata u cloud config nsystem info n default user n name foostaff n home home foostaff n shell bin bash n lock passwd true n gecos foostaff n sudo nruncmd n n wget o home foostaff ssh authorized keys q t t n n n n u network none u nics u floating ips none u flavor ram none u volume size false u state u present u auto ip true u cloud none u floating ip pools u region name none u key name u username u api timeout none u auth none u endpoint type u public u boot volume none u key none u cacert none u terminate volume false u wait true u name u singlebox u timeout u cert none u volumes u verify true u config drive false u openstack u os ext sts task state none u addresses u internal u image u id u u os ext sts vm state u active u os srv usg launched at u u name attr u name u flavor u id u u name u micro u az u nova u id u u security groups u name u default u user id u u os dcf diskconfig u manual u networks u internal u u u u u cloud u envvars u key name u username u progress u os ext sts power state u interface ip u u config drive u u status u active u updated u u hostid u u human id true u os srv usg terminated at none u public u u public u u private u u os ext az availability zone u nova u name u singlebox u created u u tenant id u u region u regionone u os extended volumes volumes attached u volumes u metadata u human id u singlebox u id u ,1 89773,10616618149.0,IssuesEvent,2019-10-12 13:09:49,neutralinojs/neutralinojs,https://api.github.com/repos/neutralinojs/neutralinojs,opened,Add contributors list to README,documentation,"Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci We need to easily update when there are new contributors ",1.0,"Add contributors list to README - Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci We need to easily update when there are new contributors ",0,add contributors list to readme use a quick tool like we need to easily update when there are new contributors ,0 332689,24347920600.0,IssuesEvent,2022-10-02 15:14:54,ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario,https://api.github.com/repos/ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario,reopened,Contextualizar o projeto,documentation," documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.",1.0,"Contextualizar o projeto - documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.",0,contextualizar o projeto documentação de contexto é um texto descritivo com a visão geral do projeto abordado que inclui o contexto o problema os objetivos a justificativa e o público alvo do projeto ,0 815533,30560275691.0,IssuesEvent,2023-07-20 14:13:09,intersystems-community/vscode-objectscript,https://api.github.com/repos/intersystems-community/vscode-objectscript,closed,revisit support export/import xml,enhancement upstream priority/important-soon,"Hi All, with the deprecation of InterSystems Studio, can we look into getting export/import of xml packaged source code implemented? As a developer i often have the issue that i need to transport code from e.g. 
a non CCR controlled scratch environment to the BASE environment which then is under CCR source control. This usually is done via ISC Studio export/import functionality. ISC Studio import functionality also triggers the question on source controlled environment to add all imported code to source control (here CCR). Using UDL as the export format can be prohibitive if the code consists of multiple classes/routines in multiple packages. (e.g. InterSystems Integration projects) Also the current export functionality is rather clunky in large namespaces (i.e. thousands of classes) and also i found a bit hit and miss in regards to it working (got it working 2 out of 10 tries, usually not working when i try to present the functionality to others)",1.0,"revisit support export/import xml - Hi All, with the deprecation of InterSystems Studio, can we look into getting export/import of xml packaged source code implemented? As a developer i often have the issue that i need to transport code from e.g. a non CCR controlled scratch environment to the BASE environment which then is under CCR source control. This usually is done via ISC Studio export/import functionality. ISC Studio import functionality also triggers the question on source controlled environment to add all imported code to source control (here CCR). Using UDL as the export format can be prohibitive if the code consists of multiple classes/routines in multiple packages. (e.g. InterSystems Integration projects) Also the current export functionality is rather clunky in large namespaces (i.e. thousands of classes) and also i found a bit hit and miss in regards to it working (got it working 2 out of 10 tries, usually not working when i try to present the functionality to others)",0,revisit support export import xml hi all with the deprecation of intersystems studio can we look into getting export import of xml packaged source code implemented as a developer i often have the issue that i need to transport code from e g a non ccr controlled scratch environment to the base environment which then is under ccr source control this usually is done via isc studio export import functionality isc studio import functionality also triggers the question on source controlled environment to add all imported code to source control here ccr using udl as the export format can be prohibitive if the code consists of multiple classes routines in multiple packages e g intersystems integration projects also the current export functionality is rather clunky in large namespaces i e thousands of classes and also i found a bit hit and miss in regards to it working got it working out of tries usually not working when i try to present the functionality to others ,0 89971,25939116341.0,IssuesEvent,2022-12-16 16:38:03,TrueBlocks/trueblocks-docker,https://api.github.com/repos/TrueBlocks/trueblocks-docker,closed,Choose consistent convention for tagging releases,enhancement TB-build,"In the docker versions, we use `0.40.0-beta` for version tagging. In the core repo, we use `v0.40.0-beta` for tagging. I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it. Choices: 1) leave them different 2) switch to `0.40.0-beta` for all repos 3) switch to `v0.40.0-beta` for all repos ",1.0,"Choose consistent convention for tagging releases - In the docker versions, we use `0.40.0-beta` for version tagging. In the core repo, we use `v0.40.0-beta` for tagging. I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it. 
Choices: 1) leave them different 2) switch to `0.40.0-beta` for all repos 3) switch to `v0.40.0-beta` for all repos ",0,choose consistent convention for tagging releases in the docker versions we use beta for version tagging in the core repo we use beta for tagging i prefer beta format with the v but it seems counter to the way docker does it choices leave them different switch to beta for all repos switch to beta for all repos ,0 920,4622139259.0,IssuesEvent,2016-09-27 06:04:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service: timeout option not respected,affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### SUMMARY The `timeout` option specified in a docker_service task seems to be not actually used. ##### STEPS TO REPRODUCE SEE https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_service.py#L837-L862 ",True,"docker_service: timeout option not respected - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_service ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### SUMMARY The `timeout` option specified in a docker_service task seems to be not actually used. ##### STEPS TO REPRODUCE SEE https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_service.py#L837-L862 ",1,docker service timeout option not respected issue type bug report component name docker service ansible version ansible summary the timeout option specified in a docker service task seems to be not actually used steps to reproduce see ,1 1724,6574505992.0,IssuesEvent,2017-09-11 13:08:36,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,yum list installed doesn't show source repo,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - yum ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/ak/ansible/webservers/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Ansible Host: CentOS7 Managed: CentOS7 ##### SUMMARY no repo name is provided for yum list=installed ##### STEPS TO REPRODUCE ``` tasks: - name: yum list yum: list=installed register: output - name: show all debug: ""msg={{ output.results }}"" ``` ##### EXPECTED RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""@base"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""@anaconda"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` ##### ACTUAL RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""installed"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""installed"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` Also: compare with `yum list installed` in command-line: ``` yum-utils.noarch 1.1.31-34.el7 @base zlib.x86_64 1.2.7-15.el7 @anaconda ``` ",True,"yum list installed doesn't show source repo - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - yum 
##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/ak/ansible/webservers/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Ansible Host: CentOS7 Managed: CentOS7 ##### SUMMARY no repo name is provided for yum list=installed ##### STEPS TO REPRODUCE ``` tasks: - name: yum list yum: list=installed register: output - name: show all debug: ""msg={{ output.results }}"" ``` ##### EXPECTED RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""@base"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""@anaconda"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` ##### ACTUAL RESULTS ``` }, { ""arch"": ""noarch"", ""epoch"": ""0"", ""name"": ""yum-utils"", ""nevra"": ""0:yum-utils-1.1.31-34.el7.noarch"", ""release"": ""34.el7"", ""repo"": ""installed"", ""version"": ""1.1.31"", ""yumstate"": ""installed"" }, { ""arch"": ""x86_64"", ""epoch"": ""0"", ""name"": ""zlib"", ""nevra"": ""0:zlib-1.2.7-15.el7.x86_64"", ""release"": ""15.el7"", ""repo"": ""installed"", ""version"": ""1.2.7"", ""yumstate"": ""installed"" } ``` Also: compare with `yum list installed` in command-line: ``` yum-utils.noarch 1.1.31-34.el7 @base zlib.x86_64 1.2.7-15.el7 @anaconda ``` ",1,yum list installed doesn t show source repo issue type bug report component name yum ansible version ansible config file home ak ansible webservers ansible cfg configured module search path default w o overrides os environment ansible host managed summary no repo name is provided for yum list installed steps to reproduce tasks name yum list yum list installed register output name show all debug msg output results expected results arch noarch epoch name yum utils nevra yum utils noarch release repo base version yumstate installed arch epoch name zlib nevra zlib release repo anaconda version yumstate installed actual results arch noarch epoch name yum utils nevra yum utils noarch release repo installed version yumstate installed arch epoch name zlib nevra zlib release repo installed version yumstate installed also compare with yum list installed in command line yum utils noarch base zlib anaconda ,1 35946,9691016116.0,IssuesEvent,2019-05-24 10:02:11,Lundalogik/lip,https://api.github.com/repos/Lundalogik/lip,opened,Support for Chromium,bug package builder,"The package builder does not work with Chromium. This needs to be fixed since the desktop client will move completely to chromium within a few releases. Tricky situation: We want the package builder to work also for customers not using Chromium yet. Or can we say that once we fix support for Chromium we will release a new major version and customers using older DC will not be able to use the package builder?",1.0,"Support for Chromium - The package builder does not work with Chromium. This needs to be fixed since the desktop client will move completely to chromium within a few releases. Tricky situation: We want the package builder to work also for customers not using Chromium yet. 
Or can we say that once we fix support for Chromium we will release a new major version and customers using older DC will not be able to use the package builder?",0,support for chromium the package builder does not work with chromium this needs to be fixed since the desktop client will move completely to chromium within a few releases tricky situation we want the package builder to work also for customers not using chromium yet or can we say that once we fix support for chromium we will release a new major version and customers using older dc will not be able to use the package builder ,0 117923,9965453726.0,IssuesEvent,2019-07-08 08:48:41,ubtue/DatenProbleme,https://api.github.com/repos/ubtue/DatenProbleme,closed,ISSN 2150-9301 Religion and society : advances in research Abstract und Keywords,Zotero_AUTO ready for testing,"Bei den Artikeln stehen Abstract und Keywords. https://www.berghahnjournals.com/view/journals/religion-and-society/9/1/arrs090103.xml Beides wird nicht übertragen. ",1.0,"ISSN 2150-9301 Religion and society : advances in research Abstract und Keywords - Bei den Artikeln stehen Abstract und Keywords. https://www.berghahnjournals.com/view/journals/religion-and-society/9/1/arrs090103.xml Beides wird nicht übertragen. ",0,issn religion and society advances in research abstract und keywords bei den artikeln stehen abstract und keywords beides wird nicht übertragen ,0 1717,6574472923.0,IssuesEvent,2017-09-11 13:01:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Copy Module Fails with relatively large files,affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Copy module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500) lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500) lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat /etc/ansible/hosts local ansible_host=127.0.0.1 test_host ansible_host=192.168.56.50 $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg [defaults] log_path = /var/log/ansible.log [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT ``` $ hostnamectl Static hostname: ansible Icon name: computer-vm Chassis: vm Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5 Operating System: Ubuntu 14.04.5 LTS Kernel: Linux 3.13.0-100-generic Architecture: x86_64 ``` ##### SUMMARY When trying to copy a large file (jdk installer 195Mb) the copy module fails, but works perfectly with small files (conf files for example) ##### STEPS TO REPRODUCE Run an ad-hoc command with the large file ``` ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644' ``` ##### EXPECTED RESULTS Expected a successfully file copy ##### ACTUAL RESULTS It fails with MemoryError ``` Using /etc/ansible/ansible.cfg as config file An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/home/vagrant/ansible/lib/ansible/plugins/action/copy.py"", line 157, in run source_full = self._loader.get_real_file(source_full) File ""/home/vagrant/ansible/lib/ansible/parsing/dataloader.py"", line 402, in get_real_file if is_encrypted_file(f): File ""/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py"", line 152, in is_encrypted_file b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict') File ""/home/vagrant/ansible/lib/ansible/module_utils/_text.py"", line 177, in to_text return obj.decode(encoding, errors) MemoryError test_host | FAILED! => { ""failed"": true, ""msg"": ""Unexpected failure during module execution."", ""stdout"": """" } ```",True,"Copy Module Fails with relatively large files - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Copy module ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500) lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500) lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat /etc/ansible/hosts local ansible_host=127.0.0.1 test_host ansible_host=192.168.56.50 $ egrep -v ""^#|^$"" /etc/ansible/ansible.cfg [defaults] log_path = /var/log/ansible.log [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT ``` $ hostnamectl Static hostname: ansible Icon name: computer-vm Chassis: vm Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5 Operating System: Ubuntu 14.04.5 LTS Kernel: Linux 3.13.0-100-generic Architecture: x86_64 ``` ##### SUMMARY When trying to copy a large file (jdk installer 195Mb) the copy module fails, but works perfectly with small files (conf files for example) ##### STEPS TO REPRODUCE Run an ad-hoc command with the large file ``` ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644' ``` ##### EXPECTED RESULTS Expected a successfully file copy ##### ACTUAL RESULTS It fails with MemoryError ``` Using /etc/ansible/ansible.cfg as config file An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 119, in run res = self._execute() File ""/home/vagrant/ansible/lib/ansible/executor/task_executor.py"", line 490, in _execute result = self._handler.run(task_vars=variables) File ""/home/vagrant/ansible/lib/ansible/plugins/action/copy.py"", line 157, in run source_full = self._loader.get_real_file(source_full) File ""/home/vagrant/ansible/lib/ansible/parsing/dataloader.py"", line 402, in get_real_file if is_encrypted_file(f): File ""/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py"", line 152, in is_encrypted_file b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict') File ""/home/vagrant/ansible/lib/ansible/module_utils/_text.py"", line 177, in to_text return obj.decode(encoding, errors) MemoryError test_host | FAILED! => { ""failed"": true, ""msg"": ""Unexpected failure during module execution."", ""stdout"": """" } ```",1,copy module fails with relatively large files issue type bug report component name copy module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration cat etc ansible hosts local ansible host test host ansible host egrep v etc ansible ansible cfg log path var log ansible log os environment hostnamectl static hostname ansible icon name computer vm chassis vm boot id operating system ubuntu lts kernel linux generic architecture summary when trying to copy a large file jdk installer the copy module fails but works perfectly with small files conf files for example steps to reproduce run an ad hoc command with the large file ansible vvv test host s m copy a src vagrant jdk windows exe dest var www html owner www data group www data mode expected results expected a successfully file copy actual results it fails with memoryerror using etc ansible ansible cfg as config file an exception occurred during task execution the full traceback is traceback most recent call last file home vagrant ansible lib ansible executor task executor py line in run res self execute file home vagrant ansible lib ansible executor task executor py line in execute result self handler run task vars variables file home vagrant ansible lib ansible plugins action copy py line in run source full self loader get real file source full file home vagrant ansible lib ansible parsing dataloader py line in get real file if is encrypted file f file home vagrant ansible lib ansible parsing vault init py line in is encrypted file b vaulttext to bytes to text vaulttext encoding ascii errors strict encoding ascii errors strict file home vagrant ansible lib ansible module utils text py line in to text return obj decode encoding errors memoryerror test host failed failed true msg unexpected failure during module execution stdout ,1 203014,15864980746.0,IssuesEvent,2021-04-08 14:17:36,tskit-dev/msprime,https://api.github.com/repos/tskit-dev/msprime,closed,Examples for coalescence rates and mean times,documentation,"#1614 added some content for explaining the numerical methods on the demography debugger, and added a TODO section for the coalescence_rate_trajectory() and mean_coalescence_time() methods. I decided I wasn't the right person to explain these. 
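In the meantime, here is a bare-bones usage sketch of the two methods that the eventual docs could flesh out. It assumes the msprime 1.x Demography API; the single-population model, the population label 'pop_0' and the `lineages=` keyword are my placeholders (older releases spelled the argument `num_samples`), and the sizes are arbitrary.

```
import msprime

# One hypothetical population of constant size 1000, purely to exercise the
# two methods; a real example would use a published demographic model.
demography = msprime.Demography.isolated_model([1000])
debug = demography.debug()

# Mean coalescence time for two lineages sampled in 'pop_0'; for a single
# constant-size population this should come out close to 2 * Ne = 2000.
mean_t = debug.mean_coalescence_time(lineages={'pop_0': 2})

# Instantaneous coalescence rate evaluated on a grid of times.
steps = [0, 100, 1000, 10000]
rates, probs = debug.coalescence_rate_trajectory(steps, lineages={'pop_0': 2})
print(mean_t, rates)
```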
@petrelharp, @apragsdale, would one of you be able to take this up? We can probably reuse the examples from whichever paper we used this on, right?",1.0,"Examples for coalescence rates and mean times - #1614 added some content for explaining the numerical methods on the demography debugger, and added a TODO section for the coalescence_rate_trajectory() and mean_coalescence_time() methods. I decided I wasn't the right person to explain these. @petrelharp, @apragsdale, would one of you be able to take this up? We can probably reuse the examples from whichever paper we used this on, right?",0,examples for coalescence rates and mean times added some content for explaining the numerical methods on the demography debugger and added a todo section for the coalescence rate trajectory and mean coalescence time methods i decided i wasn t the right person to explain these petrelharp apragsdale would one of you be able to take this up we can probably reuse the examples from whichever paper we used this on right ,0 516983,14992262388.0,IssuesEvent,2021-01-29 09:37:18,buddyboss/buddyboss-platform,https://api.github.com/repos/buddyboss/buddyboss-platform,opened,The Groups is not appearing in the search result when the Location auto-complete for BuddyPress activate ,bug priority: medium,"**Describe the bug** The Groups is not appearing in the search result when the Location auto-complete for BuddyPress activate **To Reproduce** Steps to reproduce the behavior: https://www.loom.com/share/69e3b40ffd524a5989821488362dcf28 **Support ticket links** https://secure.helpscout.net/conversation/1407830238/122145?folderId=3955985",1.0,"The Groups is not appearing in the search result when the Location auto-complete for BuddyPress activate - **Describe the bug** The Groups is not appearing in the search result when the Location auto-complete for BuddyPress activate **To Reproduce** Steps to reproduce the behavior: https://www.loom.com/share/69e3b40ffd524a5989821488362dcf28 **Support ticket links** https://secure.helpscout.net/conversation/1407830238/122145?folderId=3955985",0,the groups is not appearing in the search result when the location auto complete for buddypress activate describe the bug the groups is not appearing in the search result when the location auto complete for buddypress activate to reproduce steps to reproduce the behavior support ticket links ,0 17607,4174961541.0,IssuesEvent,2016-06-21 15:29:06,telerik/kendo-ui-core,https://api.github.com/repos/telerik/kendo-ui-core,opened,Document limited touch gesture support when multiple Grid features rely on it,Documentation,1043467 (Drawer + virtual Grid),1.0,Document limited touch gesture support when multiple Grid features rely on it - 1043467 (Drawer + virtual Grid),0,document limited touch gesture support when multiple grid features rely on it drawer virtual grid ,0 271130,29299164564.0,IssuesEvent,2023-05-25 01:06:39,panasalap/linux-4.19.72_test1,https://api.github.com/repos/panasalap/linux-4.19.72_test1,opened,CVE-2023-33203 (Medium) detected in linux-yoctov5.4.51,Mend: dependency security vulnerability,"## CVE-2023-33203 - Medium Severity Vulnerability
Vulnerable Library - linux-yoctov5.4.51

Yocto Linux Embedded kernel

Library home page: https://git.yoctoproject.org/git/linux-yocto

Found in HEAD commit: f1b7c617b9b8f4135ab2f75a0c407cc44d43683f

Found in base branch: master

Vulnerable Source Files (2)

/drivers/net/ethernet/qualcomm/emac/emac.c /drivers/net/ethernet/qualcomm/emac/emac.c

Vulnerability Details

The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/net/ethernet/qualcomm/emac/emac.c if a physically proximate attacker unplugs an emac based device.

Publish Date: 2023-05-18

URL: CVE-2023-33203

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
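These map directly onto the CVSS v3 base-score formula, so the 5.5 in the heading can be reproduced from the metric values above using the published v3.1 weights; a quick check in Python:

```
import math

# Published CVSS v3.1 weights for AV:L / AC:L / PR:N / UI:R and C:N / I:N / A:H
av, ac, pr, ui = 0.55, 0.77, 0.85, 0.62
conf, integ, avail = 0.0, 0.0, 0.56

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)   # impact sub-score = 0.56
impact = 6.42 * iss                                 # scope is Unchanged
exploitability = 8.22 * av * ac * pr * ui

# 'Round up' to one decimal place, as the specification defines it.
base_score = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base_score)  # 5.5
```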

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2023-33203

Release Date: 2023-05-18

Fix Resolution: v4.14.312, v4.19.280, v5.4.240, v5.10.177, v5.15.105, v6.1.22, v6.2.9
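As a rough way to check a running kernel against the per-branch fixed versions above, something like the helper below works; the function name is mine, and plain version comparison is only a heuristic because distribution kernels routinely backport fixes without bumping the upstream version.

```
import platform

FIXED = {
    (4, 14): (4, 14, 312), (4, 19): (4, 19, 280), (5, 4): (5, 4, 240),
    (5, 10): (5, 10, 177), (5, 15): (5, 15, 105), (6, 1): (6, 1, 22), (6, 2): (6, 2, 9),
}

def appears_fixed(release=None):
    # e.g. '5.4.240-yocto-standard' -> (5, 4, 240)
    release = release or platform.release()
    version = tuple(int(p) for p in release.split('-')[0].split('.')[:3])
    fixed = FIXED.get(version[:2])
    if fixed is None:
        # Branches not listed in the advisory: mainline 6.3+ already has the fix.
        return version >= (6, 3)
    return version >= fixed

print(appears_fixed('5.4.51'))  # False - below the fixed 5.4.240
```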

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2023-33203 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2023-33203 - Medium Severity Vulnerability
Vulnerable Library - linux-yoctov5.4.51

Yocto Linux Embedded kernel

Library home page: https://git.yoctoproject.org/git/linux-yocto

Found in HEAD commit: f1b7c617b9b8f4135ab2f75a0c407cc44d43683f

Found in base branch: master

Vulnerable Source Files (2)

/drivers/net/ethernet/qualcomm/emac/emac.c /drivers/net/ethernet/qualcomm/emac/emac.c

Vulnerability Details

The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/net/ethernet/qualcomm/emac/emac.c if a physically proximate attacker unplugs an emac based device.

Publish Date: 2023-05-18

URL: CVE-2023-33203

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Origin: https://www.linuxkernelcves.com/cves/CVE-2023-33203

Release Date: 2023-05-18

Fix Resolution: v4.14.312, v4.19.280, v5.4.240, v5.10.177, v5.15.105, v6.1.22, v6.2.9

*** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers net ethernet qualcomm emac emac c drivers net ethernet qualcomm emac emac c vulnerability details the linux kernel before has a race condition and resultant use after free in drivers net ethernet qualcomm emac emac c if a physically proximate attacker unplugs an emac based device publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0 169240,13131095540.0,IssuesEvent,2020-08-06 16:24:26,HEPCloud/decisionengine,https://api.github.com/repos/HEPCloud/decisionengine,closed,log_level issues in new decisionengine 1.2.0-1,fixed_in_rc prj_testing,"I am trying to test the new functionality which was implemented for issue !84. The logger section of the /etc/decisionengine/decision_engine.conf is below 'logger' : {'log_file': '/var/log/decisionengine/decision_engine_log', 'max_file_size': 200*1000000, 'max_backup_count': 6, 'log_level': ""DEBUG"", }, But although I am running seven channels I do not see any DEBUG entries in any of the logs. Is any further configuration necessary? Also what is the syntax to set the log level on a channel by channel basis. This is set up on fermicloud117.fnal.gov right now, I can give root login if needed. Steve Timm ",1.0,"log_level issues in new decisionengine 1.2.0-1 - I am trying to test the new functionality which was implemented for issue !84. The logger section of the /etc/decisionengine/decision_engine.conf is below 'logger' : {'log_file': '/var/log/decisionengine/decision_engine_log', 'max_file_size': 200*1000000, 'max_backup_count': 6, 'log_level': ""DEBUG"", }, But although I am running seven channels I do not see any DEBUG entries in any of the logs. Is any further configuration necessary? Also what is the syntax to set the log level on a channel by channel basis. This is set up on fermicloud117.fnal.gov right now, I can give root login if needed. 
Steve Timm ",0,log level issues in new decisionengine i am trying to test the new functionality which was implemented for issue the logger section of the etc decisionengine decision engine conf is below logger log file var log decisionengine decision engine log max file size max backup count log level debug but although i am running seven channels i do not see any debug entries in any of the logs is any further configuration necessary also what is the syntax to set the log level on a channel by channel basis this is set up on fnal gov right now i can give root login if needed steve timm ,0 172877,27345673735.0,IssuesEvent,2023-02-27 04:35:42,AlphaWallet/alpha-wallet-ios,https://api.github.com/repos/AlphaWallet/alpha-wallet-ios,closed,Remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled,Design Phase 2: To consider,"**Zeplin:** https://zpl.io/g8oZleA **Changes:** - Change the headline to ""Select Active Networks"" - Update the plus icon https://zpl.io/beqJEqq - Update the network icons https://zpl.io/agE1LeZ - When you toggle Testnet on, you will see a warning about the Monopoly money - When you hit OK on the warning screen, the list of networks expands by testnet networks - We cancel Testnet mode where you only see Mainnet or Testnet at a time. They can be both displayed, so we are coming back to the same flow as previously - At the bottom of the list you have a Browse More button, it takes you to the same screen as plus button at the top - browse all other networks - Add placeholder network icons for all remaining networks - https://zpl.io/jZE1A5m ",1.0,"Remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled - **Zeplin:** https://zpl.io/g8oZleA **Changes:** - Change the headline to ""Select Active Networks"" - Update the plus icon https://zpl.io/beqJEqq - Update the network icons https://zpl.io/agE1LeZ - When you toggle Testnet on, you will see a warning about the Monopoly money - When you hit OK on the warning screen, the list of networks expands by testnet networks - We cancel Testnet mode where you only see Mainnet or Testnet at a time. 
They can be both displayed, so we are coming back to the same flow as previously - At the bottom of the list you have a Browse More button, it takes you to the same screen as plus button at the top - browse all other networks - Add placeholder network icons for all remaining networks - https://zpl.io/jZE1A5m ",0,remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled zeplin changes change the headline to select active networks update the plus icon update the network icons when you toggle testnet on you will see a warning about the monopoly money when you hit ok on the warning screen the list of networks expands by testnet networks we cancel testnet mode where you only see mainnet or testnet at a time they can be both displayed so we are coming back to the same flow as previously at the bottom of the list you have a browse more button it takes you to the same screen as plus button at the top browse all other networks add placeholder network icons for all remaining networks ,0 554081,16388596207.0,IssuesEvent,2021-05-17 13:37:54,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.fedex.com - site is not usable,browser-firefox-ios os-ios priority-important," **URL**: https://www.fedex.com/fedextrack/?trknbr=787095647716 **Browser / Version**: Firefox iOS 33.1 **Operating System**: iOS 14.4.2 **Tested Another Browser**: Yes Safari **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: After signing into account, infinite page reload cycle m m tracking date
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.fedex.com - site is not usable - **URL**: https://www.fedex.com/fedextrack/?trknbr=787095647716 **Browser / Version**: Firefox iOS 33.1 **Operating System**: iOS 14.4.2 **Tested Another Browser**: Yes Safari **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: After signing into account, infinite page reload cycle m m tracking date
Browser Configuration
  • None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox ios operating system ios tested another browser yes safari problem type site is not usable description page not loading correctly steps to reproduce after signing into account infinite page reload cycle m m tracking date browser configuration none from with ❤️ ,0 402052,11801475513.0,IssuesEvent,2020-03-18 19:31:56,googlemaps/android-maps-utils,https://api.github.com/repos/googlemaps/android-maps-utils,opened,Define build stages so that deployment only occurs when all tests pass,priority: p2 type: feature request," **Is your feature request related to a problem? Please describe.** Whenever a tag is pushed, each build matrix will trigger a deployment resulting in several archives submitted to Sonatype. Only one submission should be done. E.g. ![image](https://user-images.githubusercontent.com/463186/76999683-453b5480-6914-11ea-92dc-fd65a149721b.png) **Describe the solution you'd like** We should define [build stages](https://docs.travis-ci.com/user/build-stages) so that deployment only occurs when all tests pass. **Describe alternatives you've considered** N/A",1.0,"Define build stages so that deployment only occurs when all tests pass - **Is your feature request related to a problem? Please describe.** Whenever a tag is pushed, each build matrix will trigger a deployment resulting in several archives submitted to Sonatype. Only one submission should be done. E.g. ![image](https://user-images.githubusercontent.com/463186/76999683-453b5480-6914-11ea-92dc-fd65a149721b.png) **Describe the solution you'd like** We should define [build stages](https://docs.travis-ci.com/user/build-stages) so that deployment only occurs when all tests pass. **Describe alternatives you've considered** N/A",0,define build stages so that deployment only occurs when all tests pass is your feature request related to a problem please describe whenever a tag is pushed each build matrix will trigger a deployment resulting in several archives submitted to sonatype only one submission should be done e g describe the solution you d like we should define so that deployment only occurs when all tests pass describe alternatives you ve considered n a,0 963,4706289311.0,IssuesEvent,2016-10-13 16:42:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ansible-modules-core/network/ - Code review,affects_2.2 bug_report in progress networking P1 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_facts ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100) lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100) lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've raised one issue to track all the issues found rather than having a fairly bitty chain of tickets. If it's easier for you to raise different PRs to address the issues found I'm no issue with that - whatever is easiest for you. 
I'm wondering if for items we are happy with we should add ignore markers in, as shown here http://stackoverflow.com/questions/28829236/is-it-possible-to-ignore-one-single-specific-line-with-pylint ``` pylint -E network/*/* No config file found, using default configuration ************* Module ansible.modules.core.network.nxos.nxos_hsrp E:402,41: Undefined variable 'module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_interface E:147,19: Instance of 'list' has no 'split' member (no-member) E:535,56: Undefined variable 'command' (undefined-variable) E:581,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_static_route E:158,23: Instance of 'CustomNetworkConfig' has no 'to_lines' member (no-member) E:295,26: Instance of 'list' has no 'split' member (no-member) E:402,66: Using variable 'address' before assignment (used-before-assignment) ************* Module ansible.modules.core.network.nxos.nxos_switchport E:486,56: Undefined variable 'command' (undefined-variable) E:527,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_vrf E:501,52: Undefined variable 'cmds' (undefined-variable) ``` ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"ansible-modules-core/network/ - Code review - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME eos_facts ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100) lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100) lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY I've raised one issue to track all the issues found rather than having a fairly bitty chain of tickets. If it's easier for you to raise different PRs to address the issues found I'm no issue with that - whatever is easiest for you. 
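For the reports we decide to leave as-is, the per-line ignore markers mentioned above would look roughly like this; a hypothetical snippet, not code lifted from the nxos modules:

```
import re

def main():
    # 'module' is defined elsewhere (or only exists at runtime), so a plain
    # 'pylint -E' pass reports undefined-variable here; the trailing marker
    # suppresses just this one report on this one line.
    contents = module.params['contents']  # pylint: disable=undefined-variable
    return re.sub(r'\s+', ' ', contents)
```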
I'm wondering if for items we are happy with we should add ignore markers in, as shown here http://stackoverflow.com/questions/28829236/is-it-possible-to-ignore-one-single-specific-line-with-pylint ``` pylint -E network/*/* No config file found, using default configuration ************* Module ansible.modules.core.network.nxos.nxos_hsrp E:402,41: Undefined variable 'module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_interface E:147,19: Instance of 'list' has no 'split' member (no-member) E:535,56: Undefined variable 'command' (undefined-variable) E:581,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_static_route E:158,23: Instance of 'CustomNetworkConfig' has no 'to_lines' member (no-member) E:295,26: Instance of 'list' has no 'split' member (no-member) E:402,66: Using variable 'address' before assignment (used-before-assignment) ************* Module ansible.modules.core.network.nxos.nxos_switchport E:486,56: Undefined variable 'command' (undefined-variable) E:527,13: Undefined variable 'get_module' (undefined-variable) ************* Module ansible.modules.core.network.nxos.nxos_vrf E:501,52: Undefined variable 'cmds' (undefined-variable) ``` ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,ansible modules core network code review issue type bug report component name eos facts ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary i ve raised one issue to track all the issues found rather than having a fairly bitty chain of tickets if it s easier for you to raise different prs to address the issues found i m no issue with that whatever is easiest for you i m wondering if for items we are happy with we should add ignore markers in as shown here pylint e network no config file found using default configuration module ansible modules core network nxos nxos hsrp e undefined variable module undefined variable module ansible modules core network nxos nxos interface e instance of list has no split member no member e undefined variable command undefined variable e undefined variable get module undefined variable module ansible modules core network nxos nxos static route e instance of customnetworkconfig has no to lines member no member e instance of list has no split member no member e using variable address before assignment used before assignment module ansible modules core network nxos nxos switchport e undefined variable command undefined variable e undefined variable get module undefined variable module ansible modules core network nxos nxos vrf e undefined variable cmds undefined variable steps to reproduce expected results actual results ,1 916,4621653846.0,IssuesEvent,2016-09-27 02:43:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"junos_command errors out with ""TypeError: Type 'str' cannot be serialized""",affects_2.1 bug_report in progress networking P2 waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_command core module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes to configuration ##### OS / ENVIRONMENT $ uname -a Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 
x86_64 GNU/Linux ##### SUMMARY I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this. **Script:** name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch **Error:** TASK [/ GET USERS / Get list of all the current users on switch] *************** fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only! \n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last): \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 261, in \n main() \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 233, in main \n xmlout.append(xml_to_string(response[index])) \n File \""/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized. \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ##### STEPS TO REPRODUCE Mentioned in above section ``` name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch ``` ##### EXPECTED RESULTS returns the list of users on juniper switch. no error should be expected. ##### ACTUAL RESULTS ``` TASK [/ GET USERS / Get list of all the current users on switch] *************** EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" && echo ansible-tmp-1472681123.92-107492843053729=""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" ) && sleep 0' PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf ""/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/"" > /dev/null 2>&1 && sleep 0' fatal: [rlab-er1]: FAILED! 
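For reference, the TypeError in the traceback above is lxml refusing to serialize a plain string: CLI-style commands such as 'show configuration system login' come back as text rather than XML elements, and etree.tostring() only accepts Elements. A minimal guard along these lines sidesteps it; this is only a sketch mirroring the xml_to_string() name from the traceback, not the actual fix in module_utils/junos.py:

```
from lxml import etree

def xml_to_string(reply):
    # etree.tostring() raises the 'cannot be serialized' TypeError for plain
    # text, so only serialize real Element objects and pass text through.
    if isinstance(reply, etree._Element):
        return etree.tostring(reply)
    return reply
```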
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""junos_command""}, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last):\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 261, in \n main()\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \""/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"junos_command errors out with ""TypeError: Type 'str' cannot be serialized"" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_command core module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes to configuration ##### OS / ENVIRONMENT $ uname -a Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ##### SUMMARY I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this. **Script:** name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch **Error:** TASK [/ GET USERS / Get list of all the current users on switch] *************** fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only! \n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last): \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 261, in \n main() \n File \""/tmp/ansible_lVOmPp/ansible_module_junos_command.py\"", line 233, in main \n xmlout.append(xml_to_string(response[index])) \n File \""/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized. \n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ##### STEPS TO REPRODUCE Mentioned in above section ``` name: / GET USERS / Get list of all the current users on switch action: junos_command args: { commands: 'show configuration system login', provider: ""{{ netconf }}"" } register: curr_users_on_switch ``` ##### EXPECTED RESULTS returns the list of users on juniper switch. no error should be expected. 
##### ACTUAL RESULTS ``` TASK [/ GET USERS / Get list of all the current users on switch] *************** EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" && echo ansible-tmp-1472681123.92-107492843053729=""` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `"" ) && sleep 0' PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf ""/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/"" > /dev/null 2>&1 && sleep 0' fatal: [rlab-er1]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""junos_command""}, ""module_stderr"": ""/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\""CLI command is for debug use only!\"", RuntimeWarning)\nTraceback (most recent call last):\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 261, in \n main()\n File \""/tmp/ansible_mdpif7/ansible_module_junos_command.py\"", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \""/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\"", line 79, in xml_to_string\n File \""src/lxml/lxml.etree.pyx\"", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,junos command errors out with typeerror type str cannot be serialized issue type bug report component name junos command core module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes to configuration os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific uname a linux dev net generic ubuntu smp wed jul utc gnu linux summary i have an ansible script where i am simply using junos command module to get users list from juniper switch below is the snippet of my code i keep getting the runtimewarning and typeerror type str cannot be serialized whenever i try to run this moreover i have been successfully able to run commands like show version using the below code itself but just not show configuration system login command please look into this script name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch error task fatal failed changed false failed true module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible lvompp ansible module junos command py line in n main n file tmp ansible lvompp ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible lvompp ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml 
etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used mentioned in above section name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch expected results returns the list of users on juniper switch no error should be expected actual results task exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home mbhadoria ansible tmp ansible tmp junos command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home mbhadoria ansible tmp ansible tmp junos command rm rf home mbhadoria ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name junos command module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible ansible module junos command py line in n main n file tmp ansible ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false ,1 1226,5218843895.0,IssuesEvent,2017-01-26 17:27:01,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apache2_module fails for PHP 5.6 even though it is already enabled,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module ""[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)"" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. 
##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""php5.6"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""php5.6"", ""msg"": ""Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""rc"": 0, ""stderr"": """", ""stdout"": ""Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""stdout_lines"": [ ""Considering dependency mpm_prefork for php5.6:"", ""Considering conflict mpm_event for mpm_prefork:"", ""Considering conflict mpm_worker for mpm_prefork:"", ""Module mpm_prefork already enabled"", ""Considering conflict php5 for php5.6:"", ""Module php5.6 already enabled"" ] } ``` Running it manually on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running ""a2enmod php5.6"" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.",True,"apache2_module fails for PHP 5.6 even though it is already enabled - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module ""[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)"" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. 
##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { ""failed"": true, ""invocation"": { ""module_args"": { ""force"": false, ""name"": ""php5.6"", ""state"": ""present"" }, ""module_name"": ""apache2_module"" }, ""item"": ""php5.6"", ""msg"": ""Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""rc"": 0, ""stderr"": """", ""stdout"": ""Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n"", ""stdout_lines"": [ ""Considering dependency mpm_prefork for php5.6:"", ""Considering conflict mpm_event for mpm_prefork:"", ""Considering conflict mpm_worker for mpm_prefork:"", ""Module mpm_prefork already enabled"", ""Considering conflict php5 for php5.6:"", ""Module php5.6 already enabled"" ] } ``` Running it manually on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running ""a2enmod php5.6"" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? 
It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.",1, module fails for php even though it is already enabled issue type bug report component name module ansible version ansible config file users nick workspace redacted ansible cfg configured module search path default w o overrides configuration hostfile roles path os environment running ansible on macos sierra target server is ubuntu xenial summary enabling the module with module fails even though the module is already enabled this is the same problem as and but with a different package this module is called but identifies itself in m as module steps to reproduce name enable php module state present name actual results failed item failed true invocation module args force false name state present module name module item msg failed to set module to enabled considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n rc stderr stdout considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n stdout lines considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled running it manually on the server gives considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled echo this is load conflicts depends mpm prefork loadmodule module usr lib modules so note that manually running on the server directly gives a exit status to signal success can t module just check that instead of doing parsing with a regex what if i wanted several sets of conf files in mods available for the same module e g php prod load php dev load both loading the same module but with different config wouldn t that make it impossible for ansible to manage those with module it just seems odd that ansible requires that the module s binary name be the same as the name of its load file ,1 218566,24376064675.0,IssuesEvent,2022-10-04 01:04:59,joshnewton31080/WebGoat,https://api.github.com/repos/joshnewton31080/WebGoat,opened,CVE-2022-42004 (Medium) detected in jackson-databind-2.12.4.jar,security vulnerability,"## CVE-2022-42004 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.12.4.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /webgoat-server/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar (the scanner reports this same path four times)

Dependency Hierarchy:
- jjwt-0.9.1.jar (Root Library)
  - :x: **jackson-databind-2.12.4.jar** (Vulnerable Library)

Found in base branch: develop

Vulnerability Details

In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
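To make the attack shape concrete, here is a minimal sketch (not taken from the report; the class name and depth value are arbitrary) of the kind of deeply nested array input the description refers to. Reading into a JsonNode as below only illustrates the input shape; hitting the vulnerable BeanDeserializer._deserializeFromArray path additionally requires the customized deserialization choices mentioned above.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical class name; not part of the WebGoat code base.
public class DeepArrayShapeSketch {
    public static void main(String[] args) throws Exception {
        int depth = 1000; // arbitrary; larger values make the parse increasingly expensive
        StringBuilder json = new StringBuilder();
        for (int i = 0; i < depth; i++) {
            json.append('[');
        }
        json.append(42);
        for (int i = 0; i < depth; i++) {
            json.append(']');
        }

        ObjectMapper mapper = new ObjectMapper();
        // Parsing nests one level per bracket; without a depth guard, crafted
        // input of this shape can exhaust resources for the affected setups.
        JsonNode tree = mapper.readValue(json.toString(), JsonNode.class);
        System.out.println(tree.size());
    }
}
```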

Publish Date: 2022-10-02

URL: CVE-2022-42004

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4
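Since the vulnerable jar is pulled in transitively through jjwt-0.9.1.jar, applying the fix normally means forcing the newer jackson-databind in the Maven build (for example via a dependencyManagement entry in /webgoat-server/pom.xml). A small, hypothetical check like the following can confirm which jackson-databind version actually ends up on the classpath after the override:

```java
import com.fasterxml.jackson.databind.cfg.PackageVersion;

// Hypothetical helper, not part of the WebGoat build.
public class JacksonVersionCheck {
    public static void main(String[] args) {
        // Prints the jackson-databind version resolved on the classpath,
        // which should report the fixed release once the transitive
        // dependency from jjwt has been overridden.
        System.out.println(PackageVersion.VERSION);
    }
}
```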

",True,"CVE-2022-42004 (Medium) detected in jackson-databind-2.12.4.jar - ## CVE-2022-42004 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.12.4.jar

General data-binding functionality for Jackson: works on core streaming API

Library home page: http://github.com/FasterXML/jackson

Path to dependency file: /webgoat-server/pom.xml

Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar (the scanner reports this same path four times)

Dependency Hierarchy:
- jjwt-0.9.1.jar (Root Library)
  - :x: **jackson-databind-2.12.4.jar** (Vulnerable Library)

Found in base branch: develop

Vulnerability Details

In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.

Publish Date: 2022-10-02

URL: CVE-2022-42004

CVSS 3 Score Details (5.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

Suggested Fix

Type: Upgrade version

Release Date: 2022-10-02

Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4

",0,cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file webgoat server pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jjwt jar root library x jackson databind jar vulnerable library found in base branch develop vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind ,0 311230,9530712246.0,IssuesEvent,2019-04-29 14:27:56,fedora-infra/bodhi,https://api.github.com/repos/fedora-infra/bodhi,reopened,[S] - API users should be able to create updates based on Koji side tags,API High priority RFE WebUI,"Bodhi currently allows users to create updates by specifying a list of builds via the API. In order to support Rawhide gating without addition additional steps to packagers for Rawhide packaging*, we will need to provide a way for packagers to create multi-build updates. In #3007 we are planning to automatically create updates when we see builds tagged into particular Koji tags. That will work for single build updates, but packagers often need to update multiple packages atomically due to build dependencies. The plan here is to allow packagers to work in Koji side tags to do their builds. When they are ready to publish their work, they will ask Bodhi to create an update based on that side tag. Bodhi will then query Koji to get the needed information and create an update with the builds found there. The API needs to allow packagers to create updates based on NVRs, as they do today, or based on koji side tag names. When packagers use a side tag, Koji should be queried to find the list of NVRs, and then Bodhi can proceed as it would have before. * Packagers are not used to dealing with Bodhi for Rawhide. We could just start using Bodhi for Rawhide today, but many packagers would be confused and some may dislike the extra step. This plan is about trying to minimize any extra burden placed on packagers as we add gating to Rawhide.",1.0,"[S] - API users should be able to create updates based on Koji side tags - Bodhi currently allows users to create updates by specifying a list of builds via the API. In order to support Rawhide gating without addition additional steps to packagers for Rawhide packaging*, we will need to provide a way for packagers to create multi-build updates. In #3007 we are planning to automatically create updates when we see builds tagged into particular Koji tags. 
That will work for single build updates, but packagers often need to update multiple packages atomically due to build dependencies. The plan here is to allow packagers to work in Koji side tags to do their builds. When they are ready to publish their work, they will ask Bodhi to create an update based on that side tag. Bodhi will then query Koji to get the needed information and create an update with the builds found there. The API needs to allow packagers to create updates based on NVRs, as they do today, or based on koji side tag names. When packagers use a side tag, Koji should be queried to find the list of NVRs, and then Bodhi can proceed as it would have before. * Packagers are not used to dealing with Bodhi for Rawhide. We could just start using Bodhi for Rawhide today, but many packagers would be confused and some may dislike the extra step. This plan is about trying to minimize any extra burden placed on packagers as we add gating to Rawhide.",0, api users should be able to create updates based on koji side tags bodhi currently allows users to create updates by specifying a list of builds via the api in order to support rawhide gating without addition additional steps to packagers for rawhide packaging we will need to provide a way for packagers to create multi build updates in we are planning to automatically create updates when we see builds tagged into particular koji tags that will work for single build updates but packagers often need to update multiple packages atomically due to build dependencies the plan here is to allow packagers to work in koji side tags to do their builds when they are ready to publish their work they will ask bodhi to create an update based on that side tag bodhi will then query koji to get the needed information and create an update with the builds found there the api needs to allow packagers to create updates based on nvrs as they do today or based on koji side tag names when packagers use a side tag koji should be queried to find the list of nvrs and then bodhi can proceed as it would have before packagers are not used to dealing with bodhi for rawhide we could just start using bodhi for rawhide today but many packagers would be confused and some may dislike the extra step this plan is about trying to minimize any extra burden placed on packagers as we add gating to rawhide ,0 17066,23542658287.0,IssuesEvent,2022-08-20 16:43:13,sekiguchi-nagisa/ydsh,https://api.github.com/repos/sekiguchi-nagisa/ydsh,closed,control space insertion behavior in linenoise,incompatible change Interactive API,"currently, if size of completion candidates is 1, always insert space after inserting a candidate. in some situation, space insertion is not needed (ex. variable, field, method name completion). need to change `DSState_complete` api",True,"control space insertion behavior in linenoise - currently, if size of completion candidates is 1, always insert space after inserting a candidate. in some situation, space insertion is not needed (ex. variable, field, method name completion). 
need to change `DSState_complete` api",0,control space insertion behavior in linenoise currently if size of completion candidates is always insert space after inserting a candidate in some situation space insertion is not needed ex variable field method name completion need to change dsstate complete api,0 957,4702293769.0,IssuesEvent,2016-10-13 01:20:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pn_vlan.py parameter parsing error,affects_2.3 bug_report networking P1 waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pn_vlan ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel aa1ec8af17) last updated 2016/10/07 11:07:23 (GMT +100) lib/ansible/modules/core: (devel 149f10f8b7) last updated 2016/10/07 11:07:26 (GMT +100) lib/ansible/modules/extras: (devel cc2651422a) last updated 2016/10/07 11:07:27 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blob/devel/network/netvisor/pn_vlan.py#L225 contains ```python arguement_spec = pn_arguement_spec ``` which gives a NameError exception ##### STEPS TO REPRODUCE ```python ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",True,"pn_vlan.py parameter parsing error - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pn_vlan ##### ANSIBLE VERSION ``` ansible 2.3.0 (devel aa1ec8af17) last updated 2016/10/07 11:07:23 (GMT +100) lib/ansible/modules/core: (devel 149f10f8b7) last updated 2016/10/07 11:07:26 (GMT +100) lib/ansible/modules/extras: (devel cc2651422a) last updated 2016/10/07 11:07:27 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY https://github.com/ansible/ansible-modules-core/blob/devel/network/netvisor/pn_vlan.py#L225 contains ```python arguement_spec = pn_arguement_spec ``` which gives a NameError exception ##### STEPS TO REPRODUCE ```python ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ",1,pn vlan py parameter parsing error issue type bug report component name pn vlan ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary contains python arguement spec pn arguement spec which gives a nameerror exception steps to reproduce python expected results actual results ,1 56794,23904369190.0,IssuesEvent,2022-09-08 22:15:06,MicrosoftDocs/windowsserverdocs,https://api.github.com/repos/MicrosoftDocs/windowsserverdocs,closed,Can't follow directions on thispage,Pri1 windows-server/prod remote-desktop-services/tech," ""From the Connection Center, tap the overflow menu (...) on the command bar at the top of the client."" Where is the Connection center? What does it look like? I don't see (...). I'm stuck. Please update instructions with screenshots. Thank you. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 09ab98fc-fea2-a11a-e5ca-de1430211d97 * Version Independent ID: 4d9943c3-fef7-ca86-bd03-1241a04b8135 * Content: [Get started with the Windows Desktop client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client) * Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md) * Product: **windows-server** * Technology: **remote-desktop-services** * GitHub Login: @Heidilohr * Microsoft Alias: **helohr**",1.0,"Can't follow directions on thispage - ""From the Connection Center, tap the overflow menu (...) on the command bar at the top of the client."" Where is the Connection center? What does it look like? I don't see (...). I'm stuck. Please update instructions with screenshots. Thank you. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 09ab98fc-fea2-a11a-e5ca-de1430211d97 * Version Independent ID: 4d9943c3-fef7-ca86-bd03-1241a04b8135 * Content: [Get started with the Windows Desktop client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client) * Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md) * Product: **windows-server** * Technology: **remote-desktop-services** * GitHub Login: @Heidilohr * Microsoft Alias: **helohr**",0,can t follow directions on thispage from the connection center tap the overflow menu on the command bar at the top of the client where is the connection center what does it look like i don t see i m stuck please update instructions with screenshots thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product windows server technology remote desktop services github login heidilohr microsoft alias helohr ,0 1845,6577385115.0,IssuesEvent,2017-09-12 00:32:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum module should support disableexcludes option,affects_2.0 feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### SUMMARY Core module yum, should support more yum options. Disabling excluded packages in repository (--disableexcludes) is a handy one. ",True,"Yum module should support disableexcludes option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### SUMMARY Core module yum, should support more yum options. Disabling excluded packages in repository (--disableexcludes) is a handy one. 
",1,yum module should support disableexcludes option issue type feature idea component name yum module ansible version ansible summary core module yum should support more yum options disabling excluded packages in repository disableexcludes is a handy one ,1 375796,11134803794.0,IssuesEvent,2019-12-20 12:49:09,visual-framework/vf-core,https://api.github.com/repos/visual-framework/vf-core,closed,ebi-vf1-integration component additions,Priority: High Status: WIP Type: Bug,"In the `vf-wp` WordPress theme I'm seeing CSS specificity issues with `ebi-global.css` overriding the friendlier `vf-core` styles. Logging those as I notice them in this issue below.",1.0,"ebi-vf1-integration component additions - In the `vf-wp` WordPress theme I'm seeing CSS specificity issues with `ebi-global.css` overriding the friendlier `vf-core` styles. Logging those as I notice them in this issue below.",0,ebi integration component additions in the vf wp wordpress theme i m seeing css specificity issues with ebi global css overriding the friendlier vf core styles logging those as i notice them in this issue below ,0 839,4479326075.0,IssuesEvent,2016-08-27 14:51:18,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module SHA version not work for clone,docs_report P3 waiting_on_maintainer,"In documentation: What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name. But it not work for clone, because `git clone` not support this. So in this case better note about this in documentation. ",True,"git module SHA version not work for clone - In documentation: What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name. But it not work for clone, because `git clone` not support this. So in this case better note about this in documentation. ",1,git module sha version not work for clone in documentation what version of the repository to check out this can be the full character sha hash the literal string head a branch name or a tag name but it not work for clone because git clone not support this so in this case better note about this in documentation ,1 1053,4863884893.0,IssuesEvent,2016-11-14 16:31:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway",affects_1.9 aws bug_report cloud waiting_on_maintainer,"Issue Type: Bug Report Ansible Version: 1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible 2.0.0 (devel f0efe1ecb0) Ansible Configuration: default \ as installed Environment: Ubuntu 14.04 Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure Steps To Reproduce: 1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006 2. Playbook will fail with message ""msg: wait for spot requests timeout on ..."" 3. 
Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags Expected Results: Cancel sport request on spot_wait_timeout Actual Results: Spot request still open ",True,"ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway - Issue Type: Bug Report Ansible Version: 1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible 2.0.0 (devel f0efe1ecb0) Ansible Configuration: default \ as installed Environment: Ubuntu 14.04 Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure Steps To Reproduce: 1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006 2. Playbook will fail with message ""msg: wait for spot requests timeout on ..."" 3. Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags Expected Results: Cancel sport request on spot_wait_timeout Actual Results: Spot request still open ",1, module spot wait timeout exceeded but spot instance launched anyway issue type bug report ansible version from devel ansible configuration default as installed environment ubuntu summary spot wait timeout exceeded and task failed but spot instance launched anyway because spot request is not canceled on failure steps to reproduce run playbook playbook will fail with message msg wait for spot requests timeout on check spot requests in aws console micro will be open and eventually will be converted to instance without any tags expected results cancel sport request on spot wait timeout actual results spot request still open ,1 1831,6577356939.0,IssuesEvent,2017-09-12 00:20:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"linode module: ""name"" parameter required, but documentation says it isn't",affects_2.1 bug_report cloud docs_report waiting_on_maintainer,"##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the ""name"" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says ""name is required for active state"". (P.S. I don't actually understand what value the ""name"" parameter should have, but using my server's hostname ""caprice"" makes the playbook run fine. What is the purpose of the ""name"" parameter, and how is it different from the ""linode_id"" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named ""reboot.yml"": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: ""{{ linode_api_key }}"" # name: caprice linode_id: ""{{ linode_id }}"" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the ""name"" parameter. 
##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" && echo ansible-tmp-1465211261.42-104881048472456=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" ) && sleep 0'""'""'' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf ""/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/"" > /dev/null 2>&1 && sleep 0'""'""'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: /Users/nelson/Code/server_documents/ansible/reboot.yml:5 ESTABLISH LOCAL CONNECTION FOR USER: nelson EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" && echo ansible-tmp-1465211263.04-60988620546823=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" ) && sleep 0' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf ""/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/"" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_key"": ""vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V"", ""datacenter"": null, ""distribution"": null, ""linode_id"": 1814698, ""name"": null, ""password"": null, ""payment_term"": 1, ""plan"": null, ""ssh_pub_key"": null, ""state"": ""restarted"", ""swap"": 512, ""wait"": true, ""wait_timeout"": ""300""}, ""module_name"": ""linode""}, ""msg"": ""name is required for active state""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ``` ",True,"linode module: ""name"" parameter required, but documentation says it isn't - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the ""name"" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says ""name is required for active state"". (P.S. I don't actually understand what value the ""name"" parameter should have, but using my server's hostname ""caprice"" makes the playbook run fine. What is the purpose of the ""name"" parameter, and how is it different from the ""linode_id"" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named ""reboot.yml"": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: ""{{ linode_api_key }}"" # name: caprice linode_id: ""{{ linode_id }}"" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the ""name"" parameter. 
##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" && echo ansible-tmp-1465211261.42-104881048472456=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `"" ) && sleep 0'""'""'' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf ""/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/"" > /dev/null 2>&1 && sleep 0'""'""'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: /Users/nelson/Code/server_documents/ansible/reboot.yml:5 ESTABLISH LOCAL CONNECTION FOR USER: nelson EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" && echo ansible-tmp-1465211263.04-60988620546823=""` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `"" ) && sleep 0' PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf ""/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/"" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""api_key"": ""vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V"", ""datacenter"": null, ""distribution"": null, ""linode_id"": 1814698, ""name"": null, ""password"": null, ""payment_term"": 1, ""plan"": null, ""ssh_pub_key"": null, ""state"": ""restarted"", ""swap"": 512, ""wait"": true, ""wait_timeout"": ""300""}, ""module_name"": ""linode""}, ""msg"": ""name is required for active state""} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ``` ",1,linode module name parameter required but documentation says it isn t issue type documentation report component name linode module ansible version ansible config file configured module search path default w o overrides configuration os environment running ansible from os x summary the documentation for the linode module at claims that the name parameter is not required but i seem to be unable to successfully use the linode module without it it says name is required for active state p s i don t actually understand what value the name parameter should have but using my server s hostname caprice makes the playbook run fine what is the purpose of the name parameter and how is it different from the linode id parameter steps to reproduce here s a sample playbook named reboot yml hosts caprice tasks name reboot the server local action module linode api key linode api key name caprice linode id linode id state restarted expected results i expected the playbook to run successfully without the name parameter actual results vin ansible nelson ansible playbook reboot yml ask vault pass vvvv no config file found using defaults vault password loaded callback default of type stdout playbook reboot yml plays in reboot yml play task establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r caprice bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpjrqjwt to home nelson ansible tmp ansible tmp setup ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r tt caprice bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home nelson ansible tmp ansible tmp setup rm rf home nelson ansible tmp ansible tmp dev null sleep ok task task path users nelson code server documents ansible reboot yml establish local connection for user nelson exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpmbojkl to users 
nelson ansible tmp ansible tmp linode exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users nelson ansible tmp ansible tmp linode rm rf users nelson ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api key datacenter null distribution null linode id name null password null payment term plan null ssh pub key null state restarted swap wait true wait timeout module name linode msg name is required for active state no more hosts left to retry use limit reboot retry play recap caprice ok changed unreachable failed vin ansible nelson ,1 908,4577132249.0,IssuesEvent,2016-09-17 01:43:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Guest Customization template in vSphere_Guest module,affects_2.1 cloud feature_idea vmware waiting_on_maintainer," ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` 2.1.1 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Does the vsphere_guest module support applying a guest customization template while deploying a VM from template? ",True,"Guest Customization template in vSphere_Guest module - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` 2.1.1 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Does the vsphere_guest module support applying a guest customization template while deploying a VM from template? ",1,guest customization template in vsphere guest module issue type feature idea component name vsphere guest ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary does the vsphere guest module support applying a guest customization template while deploying a vm from template ,1 2988,3995617790.0,IssuesEvent,2016-05-10 16:02:29,Comcast/traffic_control,https://api.github.com/repos/Comcast/traffic_control,closed,TC: Ansible Playbooks - Kibana,enhancement Infrastructure,"Add Ansible playbook for Kibana Acceptance Criteria - Add Kibana Role - Test that playbooks run correctly - Document",1.0,"TC: Ansible Playbooks - Kibana - Add Ansible playbook for Kibana Acceptance Criteria - Add Kibana Role - Test that playbooks run correctly - Document",0,tc ansible playbooks kibana add ansible playbook for kibana acceptance criteria add kibana role test that playbooks run correctly document,0 1870,6577493454.0,IssuesEvent,2017-09-12 01:18:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_router can't take in port id as interface,affects_2.0 cloud feature_idea openstack waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_router ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /root/setup-infra/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes made to ansible.cfg ##### OS / ENVIRONMENT I'm running Ubuntu 14.04, but this module is not platform-specific I don't think. ##### SUMMARY os_router can't take in a port ID as an internal interface, only a subnet. 
See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321 The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that. ##### STEPS TO REPRODUCE This added feature would allow you to do something like: ``` - name: Create port for my_net os_port: state: present name: ""my_net_port"" network: ""my_net"" fixed_ips: - ip_address: ""192.168.100.50"" register: my_net_port_results - name: Create my router os_router: name: my_router state: present network: ""ext-net"" interfaces: - port: ""{{ my_net_port_results.id }}"" - ""some_other_priv_subnet"" ``` This would allow the user to specify either a subnet or a port for a router internal interface. ##### EXPECTED RESULTS The router would have two interfaces with the example playbook shown above. It would have the default gateway of ""some_other_priv_subnet"", and it would have the ip assigned to ""my_net_port"". This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module. ##### ACTUAL RESULTS TBD ",True,"os_router can't take in port id as interface - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME os_router ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /root/setup-infra/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION No changes made to ansible.cfg ##### OS / ENVIRONMENT I'm running Ubuntu 14.04, but this module is not platform-specific I don't think. ##### SUMMARY os_router can't take in a port ID as an internal interface, only a subnet. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321 The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that. ##### STEPS TO REPRODUCE This added feature would allow you to do something like: ``` - name: Create port for my_net os_port: state: present name: ""my_net_port"" network: ""my_net"" fixed_ips: - ip_address: ""192.168.100.50"" register: my_net_port_results - name: Create my router os_router: name: my_router state: present network: ""ext-net"" interfaces: - port: ""{{ my_net_port_results.id }}"" - ""some_other_priv_subnet"" ``` This would allow the user to specify either a subnet or a port for a router internal interface. ##### EXPECTED RESULTS The router would have two interfaces with the example playbook shown above. It would have the default gateway of ""some_other_priv_subnet"", and it would have the ip assigned to ""my_net_port"". This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module. 
##### ACTUAL RESULTS TBD ",1,os router can t take in port id as interface issue type feature idea component name os router ansible version ansible config file root setup infra ansible cfg configured module search path default w o overrides configuration no changes made to ansible cfg os environment i m running ubuntu but this module is not platform specific i don t think summary os router can t take in a port id as an internal interface only a subnet see the neutron cli allows you to specify a port id as an interface and therefore allow you to specify an arbitrary ip for that interface it would be nice if the ansible os router module would allow you to do that steps to reproduce this added feature would allow you to do something like name create port for my net os port state present name my net port network my net fixed ips ip address register my net port results name create my router os router name my router state present network ext net interfaces port my net port results id some other priv subnet this would allow the user to specify either a subnet or a port for a router internal interface expected results the router would have two interfaces with the example playbook shown above it would have the default gateway of some other priv subnet and it would have the ip assigned to my net port this would allow subnets to be attached to multiple routers which currently isn t do able through the os router module actual results tbd ,1 730825,25190570690.0,IssuesEvent,2022-11-12 00:01:48,simonbaird/tiddlyhost,https://api.github.com/repos/simonbaird/tiddlyhost,closed,Feature request: Clone wiki,priority,"Shortly after the noob stage, a TW user identifies ""favourite plugins, settings and customizations"" that he wants for all his wikis. I dare say this happens to *all* TW users. And those that go deeper into tiddlyverse probably develop fine tuned recurring setups (e.g *public* vs *private* wikis, *work* vs *non-work* etc). But to manually drag'n drop plugins + modified shadow tids and other tidbits is a rather annoying task. Therefore, I wonder if TH could feature a simple ""Clone wiki"" feature in the *Your sites* page. It could appear as a menu option in the ""Actions"" button and it can lead to the same page as the ""Create site"" button, i.e to register a new TH site, but instead of an empty wiki it is a clone. Thoughts?",1.0,"Feature request: Clone wiki - Shortly after the noob stage, a TW user identifies ""favourite plugins, settings and customizations"" that he wants for all his wikis. I dare say this happens to *all* TW users. And those that go deeper into tiddlyverse probably develop fine tuned recurring setups (e.g *public* vs *private* wikis, *work* vs *non-work* etc). But to manually drag'n drop plugins + modified shadow tids and other tidbits is a rather annoying task. Therefore, I wonder if TH could feature a simple ""Clone wiki"" feature in the *Your sites* page. It could appear as a menu option in the ""Actions"" button and it can lead to the same page as the ""Create site"" button, i.e to register a new TH site, but instead of an empty wiki it is a clone. 
Thoughts?",0,feature request clone wiki shortly after the noob stage a tw user identifies favourite plugins settings and customizations that he wants for all his wikis i dare say this happens to all tw users and those that go deeper into tiddlyverse probably develop fine tuned recurring setups e g public vs private wikis work vs non work etc but to manually drag n drop plugins modified shadow tids and other tidbits is a rather annoying task therefore i wonder if th could feature a simple clone wiki feature in the your sites page it could appear as a menu option in the actions button and it can lead to the same page as the create site button i e to register a new th site but instead of an empty wiki it is a clone thoughts ,0 41665,10562908789.0,IssuesEvent,2019-10-04 19:32:31,hazelcast/hazelcast,https://api.github.com/repos/hazelcast/hazelcast,closed,Setting an Exception in an IAtomicReference causes it to be thrown when getting it,Internal breaking change Module: IAtomicReference Source: Community Team: Core Type: Defect,"I'm using an Hazelcast **IAtomicReference** to store a **reference to an Exception** object. When I try to get the stored **Exception**, it is **thrown** rather than returned. Is it a **desired behaviour** (I couldn't find any mention to it in the documentation) **or** is it a **defect**? ```Java @Test(expected=RuntimeException.class) public void testExceptionInAtomicReference { Config config = new Config(); HazelcastInstance hz = Hazelcast.newHazelcastInstance(config); Exception e = new RuntimeException(""my exception""); IAtomicReference ref = hz.getAtomicReference(""excRef""); ref.set(e); Exception exception = ref.get(); // <--- This throws the exception stored rather than returning it } ```",1.0,"Setting an Exception in an IAtomicReference causes it to be thrown when getting it - I'm using an Hazelcast **IAtomicReference** to store a **reference to an Exception** object. When I try to get the stored **Exception**, it is **thrown** rather than returned. Is it a **desired behaviour** (I couldn't find any mention to it in the documentation) **or** is it a **defect**? 
```Java @Test(expected=RuntimeException.class) public void testExceptionInAtomicReference { Config config = new Config(); HazelcastInstance hz = Hazelcast.newHazelcastInstance(config); Exception e = new RuntimeException(""my exception""); IAtomicReference ref = hz.getAtomicReference(""excRef""); ref.set(e); Exception exception = ref.get(); // <--- This throws the exception stored rather than returning it } ```",0,setting an exception in an iatomicreference causes it to be thrown when getting it i m using an hazelcast iatomicreference to store a reference to an exception object when i try to get the stored exception it is thrown rather than returned is it a desired behaviour i couldn t find any mention to it in the documentation or is it a defect java test expected runtimeexception class public void testexceptioninatomicreference config config new config hazelcastinstance hz hazelcast newhazelcastinstance config exception e new runtimeexception my exception iatomicreference ref hz getatomicreference excref ref set e exception exception ref get this throws the exception stored rather than returning it ,0 1835,6577363973.0,IssuesEvent,2017-09-12 00:23:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user module fails on SLES11 SP1-SP3,affects_2.0 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME user ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Source: N/A, happens on RHEL, OSX Target: SLES11 SP1-SP3 ##### SUMMARY user module fails when user does not exist but group does. ##### STEPS TO REPRODUCE ``` - name: configure usergroup group: name: usergroup gid: 60003 state: present - name: configure user account user: name: user shell: /bin/bash skeleton: /etc/skel password: ""{{ password }}"" groups: usergroup append: true no_log: true ``` ##### EXPECTED RESULTS User is created and added to the indicated group ##### ACTUAL RESULTS ``` fatal: [targethost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""sysadm"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""user"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": ""/etc/skel"", ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on sonata"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": ""60004"", ""update_password"": ""always""}, ""module_name"": ""user""}, ""msg"": ""/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n"", ""name"": ""sysadm"", ""rc"": 2} ``` see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345 looks like ansible defaults to appending -N to the useradd command when the system is not redhat, but suse11 sp1-sp3 do not support the -N flag ",True,"user module fails on SLES11 SP1-SP3 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME user ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Source: N/A, happens on RHEL, OSX Target: SLES11 SP1-SP3 ##### SUMMARY user module fails when user does not exist but group does. ##### STEPS TO REPRODUCE ``` - name: configure usergroup group: name: usergroup gid: 60003 state: present - name: configure user account user: name: user shell: /bin/bash skeleton: /etc/skel password: ""{{ password }}"" groups: usergroup append: true no_log: true ``` ##### EXPECTED RESULTS User is created and added to the indicated group ##### ACTUAL RESULTS ``` fatal: [targethost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""sysadm"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""user"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": ""/etc/skel"", ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on sonata"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": ""60004"", ""update_password"": ""always""}, ""module_name"": ""user""}, ""msg"": ""/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n"", ""name"": ""sysadm"", ""rc"": 2} ``` see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345 looks like ansible defaults to appending -N to the useradd command when the system is not redhat, but suse11 sp1-sp3 do not support the -N flag ",1,user module fails on issue type bug report component name user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration deprecation warnings false os environment source n a happens on rhel osx target summary user module fails when user does not exist but group does steps to reproduce name configure usergroup group name usergroup gid state present name configure user account user name user shell bin bash skeleton etc skel password password groups usergroup append true no log true expected results user is created and added to the indicated group actual results fatal failed changed false failed true invocation module args append true comment null createhome true expires null force false generate ssh key null group null groups sysadm home null login class null move home false name user non unique false password value specified in no log parameter remove false shell bin bash skeleton etc skel ssh key bits ssh key comment ansible generated on sonata ssh key file null ssh key passphrase null ssh key type rsa state present system false uid update password always module name user msg usr sbin useradd invalid option n ntry useradd help or useradd usage for more information n name sysadm rc see looks like ansible defaults to appending n to the useradd command when the system is not redhat but do not support the n flag ,1 1747,6574941788.0,IssuesEvent,2017-09-11 14:33:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest: index out of range exception while reconfiguring disk size,affects_2.1 bug_report cloud vmware waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... 
--- # tasks file for vm_create - name: check for dependency python-pip yum: name=""{{item}}"" state=latest with_items: - python-pip - name: check for dependencies pip: name=""{{item}}"" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: ""{{vcenter_hostname}}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" from_template: yes template_src: ""{{ vm_template }}"" cluster: ""{{ cluster }}"" resource_pool: ""{{ resource_pool }}"" power_on_after_clone: ""no"" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" state: reconfigured vm_extra_config: notes: ""created with ansible vsphere"" vm_disk: disk1: size_gb: ""{{ disk_main }}"" type: thin datastore: ""{{ datastore }}"" disk2: size_gb: ""{{ disk_var }}"" type: thin datastore: ""{{ datastore }}"" disk3: size_gb: ""{{ disk_opt }}"" type: thin datastore: ""{{ datastore }}"" disk4: size_gb: ""{{ disk_home }}"" type: thin datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""vmxnet3"" network: ""VM Network"" network_type: ""standard"" vm_hardware: memory_mb: ""{{ memory }}"" num_cpus: ""{{ cpucount }}"" osid: ""{{ osid }}"" scsi: paravirtual esxi: datacenter: ""{{ datacenter }}"" hostname: ""{{ esxi_host }}"" ... ``` ##### EXPECTED RESULTS normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS creating vm from template works fine, but reconfiguring fails with exception ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1879, in main() File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1806, in main force=force File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 842, in reconfigure_vm module, vm_disk, changes) File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ``` ",True,"vsphere_guest: index out of range exception while reconfiguring disk size - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... 
--- # tasks file for vm_create - name: check for dependency python-pip yum: name=""{{item}}"" state=latest with_items: - python-pip - name: check for dependencies pip: name=""{{item}}"" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: ""{{vcenter_hostname}}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" from_template: yes template_src: ""{{ vm_template }}"" cluster: ""{{ cluster }}"" resource_pool: ""{{ resource_pool }}"" power_on_after_clone: ""no"" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: ""{{ vcenter_hostname }}"" username: ""{{ vcenter_user }}"" password: ""{{ vcenter_pass }}"" guest: ""test_01"" state: reconfigured vm_extra_config: notes: ""created with ansible vsphere"" vm_disk: disk1: size_gb: ""{{ disk_main }}"" type: thin datastore: ""{{ datastore }}"" disk2: size_gb: ""{{ disk_var }}"" type: thin datastore: ""{{ datastore }}"" disk3: size_gb: ""{{ disk_opt }}"" type: thin datastore: ""{{ datastore }}"" disk4: size_gb: ""{{ disk_home }}"" type: thin datastore: ""{{ datastore }}"" vm_nic: nic1: type: ""vmxnet3"" network: ""VM Network"" network_type: ""standard"" vm_hardware: memory_mb: ""{{ memory }}"" num_cpus: ""{{ cpucount }}"" osid: ""{{ osid }}"" scsi: paravirtual esxi: datacenter: ""{{ datacenter }}"" hostname: ""{{ esxi_host }}"" ... ``` ##### EXPECTED RESULTS normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS creating vm from template works fine, but reconfiguring fails with exception ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1879, in main() File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 1806, in main force=force File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 842, in reconfigure_vm module, vm_disk, changes) File ""/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py"", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ``` ",1,vsphere guest index out of range exception while reconfiguring disk size issue type bug report component name ansible module vsphere guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment summary got index out of range exception while configuring vm with vsphere guest steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost gather facts false connection local roles vm create tasks file for vm create name check for dependency python pip yum name item state latest with items python pip name check for dependencies pip name item state latest with items pysphere pyvmomi name create vm from template vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test from template yes template src vm template cluster cluster resource pool resource pool power on after clone no tags create name reconfigure vm vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test state reconfigured vm extra config notes created with ansible vsphere vm disk size gb disk main type thin datastore datastore size gb disk var type thin datastore datastore size gb disk opt type thin datastore datastore size gb disk home type thin 
datastore datastore vm nic type network vm network network type standard vm hardware memory mb memory num cpus cpucount osid osid scsi paravirtual esxi datacenter datacenter hostname esxi host expected results normal playthrough with reconfigured disk sizes actual results creating vm from template works fine but reconfiguring fails with exception an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible bsledg ansible module vsphere guest py line in main file tmp ansible bsledg ansible module vsphere guest py line in main force force file tmp ansible bsledg ansible module vsphere guest py line in reconfigure vm module vm disk changes file tmp ansible bsledg ansible module vsphere guest py line in update disks hdd id vm devices split indexerror list index out of range ,1 1833,6577362666.0,IssuesEvent,2017-09-12 00:23:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,list supported devices for ios and nxos,affects_2.1 docs_report networking waiting_on_maintainer," ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_* nxos_* ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start. ##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",True,"list supported devices for ios and nxos - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ios_* nxos_* ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /home/admin-0/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start. 
##### STEPS TO REPRODUCE ``` ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` ``` ",1,list supported devices for ios and nxos issue type documentation report component name ios nxos ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration n a os environment n a summary it would be good to have a complete list of supported devices for example ansible state here that ios xe is supported i have tried running against an ios xe switch with the ios command module and there is no python interpreter installed on the switch for ansible to start steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results ,1 363204,25413313469.0,IssuesEvent,2022-11-22 21:06:09,ruthlennonatu/groot22,https://api.github.com/repos/ruthlennonatu/groot22,closed,As a customer I want to be able to use the product with ease so that my application process will be as simple as possible.,documentation enhancement,"Description: A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools Acceptance Criteria: Resolve issue. DoD: Merge request Have a document containing where to find information on Java Documentation",1.0,"As a customer I want to be able to use the product with ease so that my application process will be as simple as possible. - Description: A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools Acceptance Criteria: Resolve issue. DoD: Merge request Have a document containing where to find information on Java Documentation",0,as a customer i want to be able to use the product with ease so that my application process will be as simple as possible description a merge request with the dev branch must be made and the documentation containing information about automated java documentation tools acceptance criteria resolve issue dod merge request have a document containing where to find information on java documentation,0 1691,6574180292.0,IssuesEvent,2017-09-11 11:51:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,vsphere_guest,affects_2.2 bug_report cloud vmware waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X Sierra ##### SUMMARY vpshere_guest requires to provide esxi hostname and datacenter, but within our organisation we don't have rights to create VM on the host, only in Resource Pool ##### STEPS TO REPRODUCE ansible-playbook createvm_new.yml ``` --- - name: Create a VM in resource pool hosts: localhost connection: local gather_facts: False vars_prompt: - name: ""user"" prompt: ""Enter your username to virtualcenter"" private: no - name: ""password"" prompt: ""Enter your password to virtualcenter"" private: yes - name: ""guest"" prompt: ""Enter you guest VM name: "" private: no tasks: - name: create VM vsphere_guest: vcenter_hostname: virtualcenter.example.com validate_certs: no username: '{{ user }}' password: '{{ password }}' guest: '{{ guest }}' state: powered_off vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: This is a test VM vm_disk: disk1: size_gb: 10 type: thick datastore: my_datastore vm_nic: nic1: type: vmxnet3 network: GL-Network - Temp network_type: standard vm_hardware: memory_mb: 1024 
num_cpus: 1 osid: centos64Guest scsi: paravirtual resource_pool: ""/Resources/GL - VMware - Team "" esxi: datacenter: my_site hostname: myesxhost.example.com ``` ##### EXPECTED RESULTS I expected that i only provide hardware details and Resource pool to use ##### ACTUAL RESULTS Ansible threw an exception cause of permission denied. If i comment out the esxi part it tells about missing key-pair ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1879, in \n main()\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1867, in main\n state=state\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1414, in create_vm\n taskmor = vsphere_client._proxy.CreateVM_Task(create_vm_request)._returnval\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/resources/VimService_services.py\"", line 1094, in CreateVM_Task\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 545, in Receive\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 464, in Receive\npysphere.ZSI.FaultException: Permission to perform this operation was denied.\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",True,"vsphere_guest - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT MacOS X Sierra ##### SUMMARY vpshere_guest requires to provide esxi hostname and datacenter, but within our organisation we don't have rights to create VM on the host, only in Resource Pool ##### STEPS TO REPRODUCE ansible-playbook createvm_new.yml ``` --- - name: Create a VM in resource pool hosts: localhost connection: local gather_facts: False vars_prompt: - name: ""user"" prompt: ""Enter your username to virtualcenter"" private: no - name: ""password"" prompt: ""Enter your password to virtualcenter"" private: yes - name: ""guest"" prompt: ""Enter you guest VM name: "" private: no tasks: - name: create VM vsphere_guest: vcenter_hostname: virtualcenter.example.com validate_certs: no username: '{{ user }}' password: '{{ password }}' guest: '{{ guest }}' state: powered_off vm_extra_config: vcpu.hotadd: yes mem.hotadd: yes notes: This is a test VM vm_disk: disk1: size_gb: 10 type: thick datastore: my_datastore vm_nic: nic1: type: vmxnet3 network: GL-Network - Temp network_type: standard vm_hardware: memory_mb: 1024 num_cpus: 1 osid: centos64Guest scsi: paravirtual resource_pool: ""/Resources/GL - VMware - Team "" esxi: datacenter: my_site hostname: myesxhost.example.com ``` ##### EXPECTED RESULTS I expected that i only provide hardware details and Resource pool to use ##### ACTUAL RESULTS Ansible threw an exception cause of permission denied. If i comment out the esxi part it tells about missing key-pair ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: fatal: [localhost]: FAILED! 
=> {""changed"": false, ""failed"": true, ""module_stderr"": ""Traceback (most recent call last):\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1879, in \n main()\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1867, in main\n state=state\n File \""/var/folders/sz/xr8xnzbd7v38fsm1kmv3tlpm0000gp/T/ansible_kbb69f/ansible_module_vsphere_guest.py\"", line 1414, in create_vm\n taskmor = vsphere_client._proxy.CreateVM_Task(create_vm_request)._returnval\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/resources/VimService_services.py\"", line 1094, in CreateVM_Task\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 545, in Receive\n File \""build/bdist.macosx-10.11-x86_64/egg/pysphere/ZSI/client.py\"", line 464, in Receive\npysphere.ZSI.FaultException: Permission to perform this operation was denied.\n\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false} ``` ",1,vsphere guest issue type bug report component name vsphere guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos x sierra summary vpshere guest requires to provide esxi hostname and datacenter but within our organisation we don t have rights to create vm on the host only in resource pool steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook createvm new yml name create a vm in resource pool hosts localhost connection local gather facts false vars prompt name user prompt enter your username to virtualcenter private no name password prompt enter your password to virtualcenter private yes name guest prompt enter you guest vm name private no tasks name create vm vsphere guest vcenter hostname virtualcenter example com validate certs no username user password password guest guest state powered off vm extra config vcpu hotadd yes mem hotadd yes notes this is a test vm vm disk size gb type thick datastore my datastore vm nic type network gl network temp network type standard vm hardware memory mb num cpus osid scsi paravirtual resource pool resources gl vmware team esxi datacenter my site hostname myesxhost example com expected results i expected that i only provide hardware details and resource pool to use actual results ansible threw an exception cause of permission denied if i comment out the esxi part it tells about missing key pair an exception occurred during task execution to see the full traceback use vvv the error was fatal failed changed false failed true module stderr traceback most recent call last n file var folders sz t ansible ansible module vsphere guest py line in n main n file var folders sz t ansible ansible module vsphere guest py line in main n state state n file var folders sz t ansible ansible module vsphere guest py line in create vm n taskmor vsphere client proxy createvm task create vm request returnval n file build bdist macosx egg pysphere resources vimservice services py line in createvm task n file build bdist macosx egg pysphere zsi client py line in receive n file build bdist macosx egg pysphere zsi client py line in receive npysphere zsi faultexception permission to perform this operation was 
denied n n module stdout msg module failure parsed false ,1 563043,16675255453.0,IssuesEvent,2021-06-07 15:26:40,cdr/code-server,https://api.github.com/repos/cdr/code-server,closed,Python file is not running - Code-server: 3.10.1,bug extension high-priority waiting-for-info," ## OS/Web Information - Web Browser: Edge - Local OS: Linux - Ubuntu 20.04 LTS - Remote OS: Ubuntu 20.04 LTS - Remote Architecture: VM - `code-server --version`: 3.10.1 421237f499079cf88d68c02163b70e2b476bbb0d Latest ## Steps to Reproduce 1. Run a python file in terminal ## Expected It should run my python file. ## Actual - The terminal is not opening! - Throwing two errors: - command : `'python.execlnTerminal'` not found - command : `'python.execlnTerminal-icon'` not found ## Logs [backend.log](https://github.com/cdr/code-server/files/6513351/newfile.txt) ## Screenshot ![Screenshot_2021-05-20-09-55-38-390_com microsoft emmx](https://user-images.githubusercontent.com/80871183/118921147-4b342780-b955-11eb-99a4-a32e52b35254.png) ## Notes This issue can be reproduced in VS Code: Yes ",1.0,"Python file is not running - Code-server: 3.10.1 - ## OS/Web Information - Web Browser: Edge - Local OS: Linux - Ubuntu 20.04 LTS - Remote OS: Ubuntu 20.04 LTS - Remote Architecture: VM - `code-server --version`: 3.10.1 421237f499079cf88d68c02163b70e2b476bbb0d Latest ## Steps to Reproduce 1. Run a python file in terminal ## Expected It should run my python file. ## Actual - The terminal is not opening! - Throwing two errors: - command : `'python.execlnTerminal'` not found - command : `'python.execlnTerminal-icon'` not found ## Logs [backend.log](https://github.com/cdr/code-server/files/6513351/newfile.txt) ## Screenshot ![Screenshot_2021-05-20-09-55-38-390_com microsoft emmx](https://user-images.githubusercontent.com/80871183/118921147-4b342780-b955-11eb-99a4-a32e52b35254.png) ## Notes This issue can be reproduced in VS Code: Yes ",0,python file is not running code server hi there 👋 thanks for reporting a bug please search for existing issues before filing as they may contain additional information about the problem and descriptions of workarounds provide as much information as you can so that we can reproduce the issue otherwise we may not be able to help diagnose the problem and may close the issue as unreproducible or incomplete for visual defects please include screenshots to help us understand the issue os web information web browser edge local os linux ubuntu lts remote os ubuntu lts remote architecture vm code server version latest steps to reproduce run a python file in terminal expected it should run my python file actual the terminal is not opening throwing two errors command python execlnterminal not found command python execlnterminal icon not found logs first run code server with at least debug logging or trace to be really thorough by setting the log flag or the log level environment variable vvv and verbose are aliases for log trace for example code server log debug once this is done replicate the issue you re having then collect logging information from the following places the most recent files from local share code server coder logs the browser console the browser network tab additionally collecting core dumps you may need to enable them first if code server crashes can be helpful screenshot notes if you can reproduce the issue on vanilla vs code please file the issue at the vs code repository instead this issue can be reproduced in vs code yes ,0 369528,25855309296.0,IssuesEvent,2022-12-13 
13:20:48,BiBiServ/bibigrid,https://api.github.com/repos/BiBiServ/bibigrid,closed,Documentation of light rest 4j API Access,documentation,"Creating a simple guide, ""How to use and start the API Server ..."" Postman examples should be updated ",1.0,"Documentation of light rest 4j API Access - Creating a simple guide, ""How to use and start the API Server ..."" Postman examples should be updated ",0,documentation of light rest api access creating a simple guide how to use and start the api server postman examples should be updated ,0 1058,4875072483.0,IssuesEvent,2016-11-16 08:16:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git update fails every other time,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION The default which shipped with Fedora release 24 (Twenty Four). ##### OS / ENVIRONMENT N/A ##### SUMMARY Git clone fails every other time, with this error message ``` TASK [clone icons] ************************************************************* fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/l33tname/dotfiles/setup.retry ``` ##### STEPS TO REPRODUCE ``` - hosts: local tasks: - name: clone icons git: repo=https://github.com/jcubic/Clarity.git force=yes dest=/home/l33tname/.icons/Clarity - name: config icons command: ./configure chdir=/home/l33tname/.icons/Clarity ``` ##### EXPECTED RESULTS I expect that it works everytime not only every second time. 
##### ACTUAL RESULTS ``` TASK [clone icons] ************************************************************* task path: /home/l33tname/dotfiles/git_wtf.yaml:4 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/source_control/git.py <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" && echo ansible-tmp-1479049433.89-122334128883345=""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmpPx4qzT TO /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py <127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'chmod u+x /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/ /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C -tt 127.0.0.1 '/bin/sh -c '""'""'/usr/bin/python /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py; rm -rf ""/home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/"" > /dev/null 2>&1 && sleep 0'""'""'' fatal: [127.0.0.1]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016\r\ndebug1: Reading configuration data /home/l33tname/.ssh/config\r\ndebug1: /home/l33tname/.ssh/config line 1: Applying options for 127.0.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21589\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",True,"git update fails every other time - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION The default which shipped with Fedora release 24 (Twenty Four). ##### OS / ENVIRONMENT N/A ##### SUMMARY Git clone fails every other time, with this error message ``` TASK [clone icons] ************************************************************* fatal: [127.0.0.1]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_x7POFb/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE""} to retry, use: --limit @/home/l33tname/dotfiles/setup.retry ``` ##### STEPS TO REPRODUCE ``` - hosts: local tasks: - name: clone icons git: repo=https://github.com/jcubic/Clarity.git force=yes dest=/home/l33tname/.icons/Clarity - name: config icons command: ./configure chdir=/home/l33tname/.icons/Clarity ``` ##### EXPECTED RESULTS I expect that it works everytime not only every second time. 
##### ACTUAL RESULTS ``` TASK [clone icons] ************************************************************* task path: /home/l33tname/dotfiles/git_wtf.yaml:4 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/source_control/git.py <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" && echo ansible-tmp-1479049433.89-122334128883345=""` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `"" ) && sleep 0'""'""'' <127.0.0.1> PUT /tmp/tmpPx4qzT TO /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py <127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C '[127.0.0.1]' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '""'""'chmod u+x /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/ /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py && sleep 0'""'""'' <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C -tt 127.0.0.1 '/bin/sh -c '""'""'/usr/bin/python /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py; rm -rf ""/home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/"" > /dev/null 2>&1 && sleep 0'""'""'' fatal: [127.0.0.1]: FAILED! 
=> { ""changed"": false, ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""module_stderr"": ""OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016\r\ndebug1: Reading configuration data /home/l33tname/.ssh/config\r\ndebug1: /home/l33tname/.ssh/config line 1: Applying options for 127.0.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21589\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 1040, in \r\n main()\r\n File \""/tmp/ansible_MhEEpB/ansible_module_git.py\"", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",1,git update fails every other time issue type bug report component name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables the default which shipped with fedora release twenty four os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary git clone fails every other time with this error message task fatal failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module git py line in r n main r n file tmp ansible ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure to retry use limit home dotfiles setup retry steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts local tasks name clone icons git repo force yes dest home icons clarity name config icons command configure chdir home icons clarity expected results i expect that it works everytime not only every second time actual results task task path home dotfiles git wtf yaml using module file usr lib site packages ansible modules core source control git py establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp git py ssh exec sftp b vvv c o controlmaster auto o controlpersist o 
kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp git py sleep establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh c tt bin sh c usr bin python home ansible tmp ansible tmp git py rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name git module stderr openssh openssl fips may r reading configuration data home ssh config r home ssh config line applying options for r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to closed r n module stdout traceback most recent call last r n file tmp ansible mheepb ansible module git py line in r n main r n file tmp ansible mheepb ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure ,1 1447,6287525610.0,IssuesEvent,2017-07-19 15:07:42,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,os_server module not updating metadata on a running instance,affects_2.2 bug_report cloud openstack waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_server module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100) config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running ansible on Debian. Targeting OpenStack instances with CentOS 7.2 ##### SUMMARY Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs. 
##### STEPS TO REPRODUCE This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option): ``` --- - hosts: localhost tasks: - name: Create instance os_server: name: instance image: some-image state: present meta: groups: 'some-group' register: instance - debug: var=instance ``` ##### EXPECTED RESULTS The above playbook should return the metadata argument within the debug output (abbreviated here): ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": { ""groups"": ""some-group"" }, }, } } ``` ##### ACTUAL RESULTS In contrast, the following is obtained, where metadata is returned empty: ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": {}, }, } } ``` Note the task notifies it changed, but nothing happens to the metadata, nor to any other result provided by ansible-playbook's output (just did a diff of two consecutive runs). ",True,"os_server module not updating metadata on a running instance - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_server module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100) lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100) lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100) config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running ansible on Debian. Targeting OpenStack instances with CentOS 7.2 ##### SUMMARY Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs. ##### STEPS TO REPRODUCE This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option): ``` --- - hosts: localhost tasks: - name: Create instance os_server: name: instance image: some-image state: present meta: groups: 'some-group' register: instance - debug: var=instance ``` ##### EXPECTED RESULTS The above playbook should return the metadata argument within the debug output (abbreviated here): ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": { ""groups"": ""some-group"" }, }, } } ``` ##### ACTUAL RESULTS In contrast, the following is obtained, where metadata is returned empty: ``` TASK [debug] ******************************************************************* ok: [localhost] => { ""instance"": { ""changed"": true, ""openstack"": { ""metadata"": {}, }, } } ``` Note the task notifies it changed, but nothing happens to the metadata, nor to any other result provided by ansible-playbook's output (just did a diff of two consecutive runs). 
",1,os server module not updating metadata on a running instance issue type bug report component name os server module ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home luisg provision boxes test openstack ansible cfg configured module search path default w o overrides configuration os environment running ansible on debian targeting openstack instances with centos summary module os server does not add meta key value pairs using option meta to a running os instance using the same options while creating the os instance in the first place does add the meta key value pairs steps to reproduce this is an example playbook it assumes your openstack env is correctly set and the same playbook has been run before without the meta option hosts localhost tasks name create instance os server name instance image some image state present meta groups some group register instance debug var instance expected results the above playbook should return the metadata argument within the debug output abbreviated here task ok instance changed true openstack metadata groups some group actual results in contrast the following is obtained where metadata is returned empty task ok instance changed true openstack metadata note the task notifies it changed but nothing happens to the metadata nor to any other result provided by ansible playbook s output just did a diff of two consecutive runs ,1 1132,4998447713.0,IssuesEvent,2016-12-09 19:53:06,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,--diff doesn't show all differences with Template module,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3bac945147) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Using `--diff` when running template module doesn't show all the changes when both the template file and mode has changed. 
##### STEPS TO REPRODUCE Task: ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0755 become: yes become_method: sudo ``` myconf.conf ``` { ""obj"": { ""type"": ""foo"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` Changed task w/ updated `mode` and updated `myconf.conf` ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0775 become: yes become_method: sudo ``` ``` { ""obj"": { ""type"": ""foo1"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` ##### EXPECTED RESULTS ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": {""mode"": ""0775"", ""path"": ""/abc/myconf.conf""}, ""before"": {""mode"": ""0755"", ""path"": ""/abc/myconf.conf""},{""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n}, ""gid"": 0, ""group"": ""root"", ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""delimiter"": null, ""dest"": ""/abc/myconf.conf"", ""diff_peek"": null, ""directory_mode"": null, ""follow"": true, ""force"": false, ""group"": null, ""mode"": 509, ""original_basename"": ""myconf.conf.j2"", ""owner"": null, ""path"": ""/abc/myconf.conf"", ""recurse"": false, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": null, ""validate"": null}}, ""mode"": ""0755"", ""owner"": ""root"", ""path"": ""/abc/myconf.conf"", ""size"": 138, ""state"": ""file"", ""uid"": 0} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ##### ACTUAL RESULTS Ansible playbook was run with `--diff` and `--check` flags. The behavior is the same whether `--check` flag was used or not. 
``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""before_header"": ""/abc/myconf.conf""}, ""invocation"": {""module_args"": {""dest"": ""/abc/myconf.conf"", ""mode"": 509, ""src"": ""myconf.conf.j2""}, ""module_name"": ""template""}} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, ``` NOTE: When only `mode` was updated with no changes to the `myconf.conf` file, the output is as expected as follows: ``` --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ",True,"--diff doesn't show all differences with Template module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Template ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 3bac945147) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT N/A ##### SUMMARY Using `--diff` when running template module doesn't show all the changes when both the template file and mode has changed. ##### STEPS TO REPRODUCE Task: ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0755 become: yes become_method: sudo ``` myconf.conf ``` { ""obj"": { ""type"": ""foo"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` Changed task w/ updated `mode` and updated `myconf.conf` ``` - name: ensures /abc/myconf.conf exists template: src: ""my_template"" dest: ""/abc/myconf.conf"" mode: 0775 become: yes become_method: sudo ``` ``` { ""obj"": { ""type"": ""foo1"", ""pol"": { ""a"": ""{{ ansible_processor_vcpus }}|a_pol"", ""b"": ""b_pol, ""c"": ""c_pol"", ""d"": ""d_pol"" } } } ``` ##### EXPECTED RESULTS ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": {""mode"": ""0775"", ""path"": ""/abc/myconf.conf""}, ""before"": {""mode"": ""0755"", ""path"": ""/abc/myconf.conf""},{""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n}, ""gid"": 0, ""group"": ""root"", ""invocation"": {""module_args"": {""backup"": null, ""content"": null, ""delimiter"": null, ""dest"": ""/abc/myconf.conf"", ""diff_peek"": null, ""directory_mode"": null, ""follow"": true, ""force"": false, ""group"": null, ""mode"": 509, ""original_basename"": ""myconf.conf.j2"", ""owner"": null, ""path"": ""/abc/myconf.conf"", ""recurse"": false, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""src"": null, ""state"": null, ""validate"": null}}, ""mode"": ""0755"", ""owner"": ""root"", ""path"": ""/abc/myconf.conf"", ""size"": 138, ""state"": ""file"", ""uid"": 0} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - 
""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ##### ACTUAL RESULTS Ansible playbook was run with `--diff` and `--check` flags. The behavior is the same whether `--check` flag was used or not. ``` changed: [xx.xx.xx.xxx] => {""changed"": true, ""diff"": {""after"": ""{\n \""obj\"": {\n \""type\"": \""foo1\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""after_header"": ""dynamically generated"", ""before"": ""{\n \""obj\"": {\n \""type\"": \""foo\"",\n \""pol\"": {\n \""a\"": \""2|a_pol\"",\n \""b\"": \""b_pol,\n \""c\"": \""c_pol\"",\n \""d\"": \""d_pol\""\n }\n }\n}\n\n"", ""before_header"": ""/abc/myconf.conf""}, ""invocation"": {""module_args"": {""dest"": ""/abc/myconf.conf"", ""mode"": 509, ""src"": ""myconf.conf.j2""}, ""module_name"": ""template""}} --- before: /abc/myconf.conf +++ after: dynamically generated @@ -1,6 +1,6 @@ { ""obj"": { - ""type"": ""foo"", + ""type"": ""foo1"", ""pol"": { ""a"": ""2|a_pol"", ""b"": ""b_pol, ``` NOTE: When only `mode` was updated with no changes to the `myconf.conf` file, the output is as expected as follows: ``` --- before +++ after @@ -1,4 +1,4 @@ { - ""mode"": ""0755"", + ""mode"": ""0775"", ""path"": ""/abc/myconf.conf"" } ``` ",1, diff doesn t show all differences with template module issue type bug report component name template ansible version ansible devel configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary using diff when running template module doesn t show all the changes when both the template file and mode has changed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used task name ensures abc myconf conf exists template src my template dest abc myconf conf mode become yes become method sudo myconf conf obj type foo pol a ansible processor vcpus a pol b b pol c c pol d d pol changed task w updated mode and updated myconf conf name ensures abc myconf conf exists template src my template dest abc myconf conf mode become yes become method sudo obj type pol a ansible processor vcpus a pol b b pol c c pol d d pol expected results changed changed true diff after mode path abc myconf conf before mode path abc myconf conf after n obj n type n pol n a a pol n b b pol n c c pol n d d pol n n n n n after header dynamically generated before n obj n type foo n pol n a a pol n b b pol n c c pol n d d pol n n gid group root invocation module args backup null content null delimiter null dest abc myconf conf diff peek null directory mode null follow true force false group null mode original basename myconf conf owner null path abc myconf conf recurse false regexp null remote src null selevel null serole null setype null seuser null src null state null validate null mode owner root path abc myconf conf size state file uid before abc myconf conf after dynamically generated obj type foo type pol a a pol b b pol before after mode mode path abc myconf conf actual results ansible playbook was run with diff and check flags the behavior is the same whether check flag was used or not changed changed true diff after n obj n type n 
pol n a a pol n b b pol n c c pol n d d pol n n n n n after header dynamically generated before n obj n type foo n pol n a a pol n b b pol n c c pol n d d pol n n n n n before header abc myconf conf invocation module args dest abc myconf conf mode src myconf conf module name template before abc myconf conf after dynamically generated obj type foo type pol a a pol b b pol note when only mode was updated with no changes to the myconf conf file the output is as expected as follows before after mode mode path abc myconf conf ,1 1751,6574956800.0,IssuesEvent,2017-09-11 14:36:33,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,git module always fails on update if local has modification,affects_2.2 bug_report waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` 2.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT debian:8.0 jessie ##### SUMMARY If local git repository has modification, an update attempt of always fails with Local modifications exist, even if force=yes was given. ##### STEPS TO REPRODUCE ``` tasks: - name: update project dependency git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes with_items: ""{{ deps }}"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` failed: [127.0.0.1] => { ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""item"": { ""location"": ""/opt/tiger/neihan/conf"", ""name"": ""neihan/conf"", ""scm_refspec"": ""refs/heads/master"", ""scm_revision"": ""master"", ""scm_url"": ""ssh://*********/neihan/conf"" }, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 1023, in \r\n main()\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",True,"git module always fails on update if local has modification - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME git ##### ANSIBLE VERSION ``` 2.2.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT debian:8.0 jessie ##### SUMMARY If local git repository has modification, an update attempt of always fails with Local modifications exist, even if force=yes was given. 
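The git record above (its traceback appears further down in the same record as `UnboundLocalError: local variable 'remote_head' referenced before assignment`) boils down to a common Python pitfall: a name is bound on only one branch and then read on another. The sketch below is not the Ansible git module's code; it is a minimal, hypothetical reproduction of that pattern and one defensive way to avoid it.

```python
# Minimal, hypothetical reproduction of the failure pattern reported above.
# This is NOT the Ansible git module's actual code.

def update_repo_buggy(has_local_changes, force):
    if not has_local_changes:
        remote_head = "abc123"          # only bound when the repo is clean
    # every return below reads remote_head, even when it was never bound
    if has_local_changes and not force:
        return {"changed": False, "after": remote_head,
                "msg": "Local modifications exist"}
    return {"changed": True, "after": remote_head, "msg": "updated"}


def update_repo_fixed(has_local_changes, force):
    remote_head = None                  # bind the name up front
    if has_local_changes and not force:
        return {"changed": False, "after": remote_head,
                "msg": "Local modifications exist"}
    remote_head = "abc123"              # e.g. the result of a remote lookup
    return {"changed": True, "after": remote_head, "msg": "updated"}


if __name__ == "__main__":
    try:
        update_repo_buggy(has_local_changes=True, force=True)
    except UnboundLocalError as exc:
        print("buggy version:", exc)
    print("fixed version:", update_repo_fixed(True, True))
```

Binding the variable before any branching (or handling the local-modification case before any remote lookup is needed) removes the crash, independent of how the force flag is honoured.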
##### STEPS TO REPRODUCE ``` tasks: - name: update project dependency git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes with_items: ""{{ deps }}"" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` failed: [127.0.0.1] => { ""failed"": true, ""invocation"": { ""module_name"": ""git"" }, ""item"": { ""location"": ""/opt/tiger/neihan/conf"", ""name"": ""neihan/conf"", ""scm_refspec"": ""refs/heads/master"", ""scm_revision"": ""master"", ""scm_url"": ""ssh://*********/neihan/conf"" }, ""module_stderr"": ""Shared connection to 127.0.0.1 closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 1023, in \r\n main()\r\n File \""/tmp/ansible_U00Fwd/ansible_module_git.py\"", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n"", ""msg"": ""MODULE FAILURE"" } ``` ",1,git module always fails on update if local has modification issue type bug report component name git ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific debian jessie summary if local git repository has modification an update attempt of always fails with local modifications exist even if force yes was given steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used tasks name update project dependency git dest item location quote repo item scm url quote version item scm revision quote force yes refspec item scm refspec accept hostkey yes with items deps expected results actual results failed failed true invocation module name git item location opt tiger neihan conf name neihan conf scm refspec refs heads master scm revision master scm url ssh neihan conf module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module git py line in r n main r n file tmp ansible ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure ,1 2619,4859360755.0,IssuesEvent,2016-11-13 16:23:49,xh3b4sd/anna,https://api.github.com/repos/xh3b4sd/anna,opened,add config service,feature service,"Configuration must be provided from the outside to the executed process. Provided configuration needs to be available to all services within the service collection. Therefore a config service could be introduced which provides an hierarchical config object. The following example shows calls where certain configuration is fetched. Note that this issue also includes fixing #246. ``` s.Service().Config().Space().Connection().Weight() s.Service().Config().Space().Dimension().Depth() s.Service().Config().Space().Dimension().Count() s.Service().Config().Space().Peer().Position() ```",1.0,"add config service - Configuration must be provided from the outside to the executed process. Provided configuration needs to be available to all services within the service collection. Therefore a config service could be introduced which provides an hierarchical config object. 
The following example shows calls where certain configuration is fetched. Note that this issue also includes fixing #246. ``` s.Service().Config().Space().Connection().Weight() s.Service().Config().Space().Dimension().Depth() s.Service().Config().Space().Dimension().Count() s.Service().Config().Space().Peer().Position() ```",0,add config service configuration must be provided from the outside to the executed process provided configuration needs to be available to all services within the service collection therefore a config service could be introduced which provides an hierarchical config object the following example shows calls where certain configuration is fetched note that this issue also includes fixing s service config space connection weight s service config space dimension depth s service config space dimension count s service config space peer position ,0 810,4434218600.0,IssuesEvent,2016-08-18 01:13:02,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,get_url not using environment variable no_proxy,bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.2 (Maipo) ##### SUMMARY When setting the environment variable no_proxy, the get_url module doesn't use it. We need to bypass our corporate proxy to get files from our central fileshare. ##### STEPS TO REPRODUCE ``` --- - hosts: localhost vars: proxy_url: ""http://aws-proxy-us-east-1.company.com:8080"" noproxy_url: ""127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com"" url_to_get: ""https://fileshare.company.com/filename.tar"" dest_dir: ""/tmp"" tasks: - debug: msg=""proxy_url is {{ proxy_url }}"" - debug: msg=""noproxy_url is {{ noproxy_url }}"" - debug: msg=""url_to_get is {{ url_to_get }}"" - name: ""Set proxy environment variables"" set_fact: environment_vars: http_proxy: ""{{ proxy_url }}"" https_proxy: ""{{ proxy_url }}"" no_proxy: ""{{ noproxy_url }}"" HTTP_PROXY: ""{{ proxy_url }}"" HTTPS_PROXY: ""{{ proxy_url }}"" NO_PROXY: ""{{ noproxy_url }}"" - name: ""Download something with no env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf"" validate_certs: no force: yes become: no - name: ""Download something with env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf2"" validate_certs: no force: yes environment: ""{{ environment_vars }}"" become: no register: get_url_err ignore_errors: yes - debug: var=get_url_err ``` ##### EXPECTED RESULTS Both should download the file and drop in /tmp. For comparison, I ran this from command line using ""curl"". When I set the http_proxy and https_proxy, it failed with a 404 (blocked by the proxy). When I set no_proxy to our fileshare, it worked (bypassed the proxy). ##### ACTUAL RESULTS First task worked, second one says it timed out. However, when I captured the module output to a variable and dumped it, it was actually a 404. 
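The get_url record above concerns the `no_proxy` environment variable being ignored. Outside of Ansible, Python's standard library already implements the proxy/no_proxy convention, which makes it easy to check what behaviour the environment itself requests. The snippet below is a small diagnostic sketch using only `urllib.request`; the proxy and host names are illustrative placeholders taken from the report, not real endpoints.

```python
# Diagnostic sketch: ask Python's urllib which proxies the environment defines
# and whether a given host should bypass them, per the no_proxy convention.
import os
import urllib.request

os.environ["https_proxy"] = "http://aws-proxy-us-east-1.company.com:8080"
os.environ["no_proxy"] = "127.0.0.1,localhost,.fileshare.company.com"

print(urllib.request.getproxies())                            # proxies read from the environment
print(urllib.request.proxy_bypass("fileshare.company.com"))   # truthy => this host bypasses the proxy

# To force a direct connection for one request regardless of the environment,
# an opener built with an empty ProxyHandler disables proxying entirely:
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
# opener.open("https://fileshare.company.com/filename.tar")   # not executed here
```

Running such a check on the target host separates "the environment variables are wrong" from "the module does not honour them", which is the distinction the report is making.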
``` TASK [Download something with env vars set] ************************************ task path: /home/ec2-user/geturltest.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" && echo ansible-tmp-1465584747.19-272127080396946=""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" ) && sleep 0' PUT /tmp/tmpC_cHj6 TO /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url EXEC /bin/sh -c 'LANG=en_US.UTF-8 HTTP_PROXY=http://aws-proxy-us-east-1.company.com:8080 LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=http://aws-proxy-us-east-1.company.com:8080 NO_PROXY=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com http_proxy=http://aws-proxy-us-east-1.company.com:8080 https_proxy=http://aws-proxy-us-east-1.company.com:8080 LC_ALL=en_US.UTF-8 no_proxy=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost -> localhost]: FAILED! => {""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""invocation"": {""module_args"": {""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/wtf2"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""url"": ""https://fileshare.company.com/filename.tar"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, ""validate_certs"": false}, ""module_name"": ""get_url""}, ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""Request failed: "", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": -1, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar""} TASK [debug] ******************************************************************* task path: /home/ec2-user/geturltest.yml:47 ok: [localhost] => { ""get_url_err"": { ""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""HTTP Error 403: Forbidden"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": 403, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar"" } } ``` ",True,"get_url not using environment variable no_proxy - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME get_url ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Red Hat Enterprise Linux Server release 7.2 (Maipo) ##### SUMMARY When setting the environment variable no_proxy, the get_url module doesn't use it. We need to bypass our corporate proxy to get files from our central fileshare. 
##### STEPS TO REPRODUCE ``` --- - hosts: localhost vars: proxy_url: ""http://aws-proxy-us-east-1.company.com:8080"" noproxy_url: ""127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com"" url_to_get: ""https://fileshare.company.com/filename.tar"" dest_dir: ""/tmp"" tasks: - debug: msg=""proxy_url is {{ proxy_url }}"" - debug: msg=""noproxy_url is {{ noproxy_url }}"" - debug: msg=""url_to_get is {{ url_to_get }}"" - name: ""Set proxy environment variables"" set_fact: environment_vars: http_proxy: ""{{ proxy_url }}"" https_proxy: ""{{ proxy_url }}"" no_proxy: ""{{ noproxy_url }}"" HTTP_PROXY: ""{{ proxy_url }}"" HTTPS_PROXY: ""{{ proxy_url }}"" NO_PROXY: ""{{ noproxy_url }}"" - name: ""Download something with no env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf"" validate_certs: no force: yes become: no - name: ""Download something with env vars set"" local_action: module: get_url url: ""{{ url_to_get }}"" dest: ""{{ dest_dir }}/wtf2"" validate_certs: no force: yes environment: ""{{ environment_vars }}"" become: no register: get_url_err ignore_errors: yes - debug: var=get_url_err ``` ##### EXPECTED RESULTS Both should download the file and drop in /tmp. For comparison, I ran this from command line using ""curl"". When I set the http_proxy and https_proxy, it failed with a 404 (blocked by the proxy). When I set no_proxy to our fileshare, it worked (bypassed the proxy). ##### ACTUAL RESULTS First task worked, second one says it timed out. However, when I captured the module output to a variable and dumped it, it was actually a 404. ``` TASK [Download something with env vars set] ************************************ task path: /home/ec2-user/geturltest.yml:34 ESTABLISH LOCAL CONNECTION FOR USER: ec2-user EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" && echo ansible-tmp-1465584747.19-272127080396946=""` echo $HOME/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946 `"" ) && sleep 0' PUT /tmp/tmpC_cHj6 TO /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url EXEC /bin/sh -c 'LANG=en_US.UTF-8 HTTP_PROXY=http://aws-proxy-us-east-1.company.com:8080 LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=http://aws-proxy-us-east-1.company.com:8080 NO_PROXY=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com http_proxy=http://aws-proxy-us-east-1.company.com:8080 https_proxy=http://aws-proxy-us-east-1.company.com:8080 LC_ALL=en_US.UTF-8 no_proxy=127.0.0.1,localhost,.local,169.254.169.254,.fileshare.company.com /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/get_url; rm -rf ""/home/ec2-user/.ansible/tmp/ansible-tmp-1465584747.19-272127080396946/"" > /dev/null 2>&1 && sleep 0' fatal: [localhost -> localhost]: FAILED! 
=> {""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""invocation"": {""module_args"": {""backup"": false, ""checksum"": """", ""content"": null, ""delimiter"": null, ""dest"": ""/tmp/wtf2"", ""directory_mode"": null, ""follow"": false, ""force"": true, ""force_basic_auth"": false, ""group"": null, ""headers"": null, ""http_agent"": ""ansible-httpget"", ""mode"": null, ""owner"": null, ""regexp"": null, ""remote_src"": null, ""selevel"": null, ""serole"": null, ""setype"": null, ""seuser"": null, ""sha256sum"": """", ""src"": null, ""timeout"": 10, ""tmp_dest"": """", ""url"": ""https://fileshare.company.com/filename.tar"", ""url_password"": null, ""url_username"": null, ""use_proxy"": true, ""validate_certs"": false}, ""module_name"": ""get_url""}, ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""Request failed: "", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": -1, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar""} TASK [debug] ******************************************************************* task path: /home/ec2-user/geturltest.yml:47 ok: [localhost] => { ""get_url_err"": { ""changed"": false, ""dest"": ""/tmp/wtf2"", ""failed"": true, ""gid"": 1000, ""group"": ""ec2-user"", ""mode"": ""0644"", ""msg"": ""Request failed"", ""owner"": ""ec2-user"", ""response"": ""HTTP Error 403: Forbidden"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 74485760, ""state"": ""file"", ""status_code"": 403, ""uid"": 1000, ""url"": ""https://fileshare.company.com/filename.tar"" } } ``` ",1,get url not using environment variable no proxy issue type bug report component name get url ansible version ansible configuration default os environment red hat enterprise linux server release maipo summary when setting the environment variable no proxy the get url module doesn t use it we need to bypass our corporate proxy to get files from our central fileshare steps to reproduce hosts localhost vars proxy url noproxy url localhost local fileshare company com url to get dest dir tmp tasks debug msg proxy url is proxy url debug msg noproxy url is noproxy url debug msg url to get is url to get name set proxy environment variables set fact environment vars http proxy proxy url https proxy proxy url no proxy noproxy url http proxy proxy url https proxy proxy url no proxy noproxy url name download something with no env vars set local action module get url url url to get dest dest dir wtf validate certs no force yes become no name download something with env vars set local action module get url url url to get dest dest dir validate certs no force yes environment environment vars become no register get url err ignore errors yes debug var get url err expected results both should download the file and drop in tmp for comparison i ran this from command line using curl when i set the http proxy and https proxy it failed with a blocked by the proxy when i set no proxy to our fileshare it worked bypassed the proxy actual results first task worked second one says it timed out however when i captured the module output to a variable and dumped it it was actually a task task path home user geturltest yml establish local connection for user user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpc to home user ansible tmp ansible tmp get url exec bin sh c 
lang en us utf http proxy lc messages en us utf https proxy no proxy localhost local fileshare company com http proxy https proxy lc all en us utf no proxy localhost local fileshare company com usr bin python home user ansible tmp ansible tmp get url rm rf home user ansible tmp ansible tmp dev null sleep fatal failed changed false dest tmp failed true gid group user invocation module args backup false checksum content null delimiter null dest tmp directory mode null follow false force true force basic auth false group null headers null http agent ansible httpget mode null owner null regexp null remote src null selevel null serole null setype null seuser null src null timeout tmp dest url url password null url username null use proxy true validate certs false module name get url mode msg request failed owner user response request failed secontext unconfined u object r user tmp t size state file status code uid url task task path home user geturltest yml ok get url err changed false dest tmp failed true gid group user mode msg request failed owner user response http error forbidden secontext unconfined u object r user tmp t size state file status code uid url ,1 939,4652274009.0,IssuesEvent,2016-10-03 13:31:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Fail to check package version,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ```docker_image``` module. ##### ANSIBLE VERSION ```bash $ ansible --version ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default file used. ##### OS / ENVIRONMENT Docker Container (hosted by Debian Jessie). ##### SUMMARY When I want pull a docker image, Ansible reports that docker-py package installed is ```1.10.3``` whereas minimum required is ```1.7.0```. ##### STEPS TO REPRODUCE ```bash $ ansible -m docker_image -a ""name=nginx pull=yes"" foo foo | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.3. Minimum version required is 1.7.0."" } ```",True,"Fail to check package version - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ```docker_image``` module. ##### ANSIBLE VERSION ```bash $ ansible --version ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Default file used. ##### OS / ENVIRONMENT Docker Container (hosted by Debian Jessie). ##### SUMMARY When I want pull a docker image, Ansible reports that docker-py package installed is ```1.10.3``` whereas minimum required is ```1.7.0```. ##### STEPS TO REPRODUCE ```bash $ ansible -m docker_image -a ""name=nginx pull=yes"" foo foo | FAILED! => { ""changed"": false, ""failed"": true, ""msg"": ""Error: docker-py version is 1.10.3. 
Minimum version required is 1.7.0."" } ```",1,fail to check package version issue type bug report component name docker image module ansible version bash ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default file used os environment docker container hosted by debian jessie summary when i want pull a docker image ansible reports that docker py package installed is whereas minimum required is steps to reproduce bash ansible m docker image a name nginx pull yes foo foo failed changed false failed true msg error docker py version is minimum version required is ,1 1030,4827515754.0,IssuesEvent,2016-11-07 13:52:44,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,temporary AWS Access Keys results in auth failures,affects_1.9 aws bug_report cloud waiting_on_maintainer,"#### Issue Type: Bug Report #### Component Name: ec2 module #### Ansible Version: ansible 1.9.4 #### Ansible Configuration: none #### Environment: Mac OSX 10.11 / Not applicable #### Summary: Using temporary AWS Access Keys results in auth failures #### Steps to reproduce: generate temporary access keys (eg. via STS or SAML provider) attempt to create ec2 resource #### Expected results: ec2 resource is created #### Actual results: AWS AuthFailure exception I originally lodged this against ansible/ansible as https://github.com/ansible/ansible/issues/12959 but I think maybe this should be resolved in the core modules code. There's a comment in the code that suggests, that perhaps the modules just need to be modified to use `connect_to_aws()` ``` def get_ec2_creds(module): ''' for compatibility mode with old modules that don't/can't yet use ec2_connect method ''' ``` ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 3070, in main() File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 1249, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 792, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1186, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentialsbcce0f14-b8d4-46e0-a582-17993365b18b from my investigation the issue appears in the module_utils/ec2.py get_ec2_creds( ) which returns ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region ``` since the aws_access_key_id in this example will only work with a security_token, the method is effectively broken. I think the function should at least warn when it detects a security token or an access key that starts with ASIA instead of AKIA. 
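The point of the temporary-credentials record above is that STS credentials are only valid as a triple: access key, secret key, and session/security token. As an illustration only (using boto3 rather than the boto 2.x library in the report), a client built from STS output must carry the token through, and the ASIA prefix the reporter mentions gives a cheap sanity check.

```python
# Sketch: temporary AWS credentials must be passed on together with their
# session token, otherwise API calls fail with AuthFailure. Uses boto3 here,
# not the boto 2.x library from the original report.
import boto3


def ec2_client_from_temporary_creds(access_key, secret_key, session_token,
                                    region="us-east-1"):
    # Sanity check echoing the reporter's suggestion: temporary access keys
    # issued via STS start with "ASIA", long-lived ones with "AKIA".
    if access_key.startswith("ASIA") and not session_token:
        raise ValueError("temporary access key supplied without a session token")

    return boto3.client(
        "ec2",
        region_name=region,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        aws_session_token=session_token,   # dropping this reproduces the failure
    )

# Obtaining such a triple from STS (not executed here):
# sts = boto3.client("sts")
# creds = sts.get_session_token()["Credentials"]
# ec2 = ec2_client_from_temporary_creds(creds["AccessKeyId"],
#                                       creds["SecretAccessKey"],
#                                       creds["SessionToken"])
```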
",True,"temporary AWS Access Keys results in auth failures - #### Issue Type: Bug Report #### Component Name: ec2 module #### Ansible Version: ansible 1.9.4 #### Ansible Configuration: none #### Environment: Mac OSX 10.11 / Not applicable #### Summary: Using temporary AWS Access Keys results in auth failures #### Steps to reproduce: generate temporary access keys (eg. via STS or SAML provider) attempt to create ec2 resource #### Expected results: ec2 resource is created #### Actual results: AWS AuthFailure exception I originally lodged this against ansible/ansible as https://github.com/ansible/ansible/issues/12959 but I think maybe this should be resolved in the core modules code. There's a comment in the code that suggests, that perhaps the modules just need to be modified to use `connect_to_aws()` ``` def get_ec2_creds(module): ''' for compatibility mode with old modules that don't/can't yet use ec2_connect method ''' ``` ``` failed: [localhost] => {""failed"": true, ""parsed"": false} Traceback (most recent call last): File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 3070, in main() File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 1249, in main (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc) File ""/Users/secole/.ansible/tmp/ansible-tmp-1446095011.34-234704611276689/ec2"", line 792, in create_instances vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id File ""/Library/Python/2.7/site-packages/boto/vpc/__init__.py"", line 1153, in get_all_subnets return self.get_list('DescribeSubnets', params, [('item', Subnet)]) File ""/Library/Python/2.7/site-packages/boto/connection.py"", line 1186, in get_list raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized AuthFailureAWS was not able to validate the provided access credentialsbcce0f14-b8d4-46e0-a582-17993365b18b from my investigation the issue appears in the module_utils/ec2.py get_ec2_creds( ) which returns ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region ``` since the aws_access_key_id in this example will only work with a security_token, the method is effectively broken. I think the function should at least warn when it detects a security token or an access key that starts with ASIA instead of AKIA. 
",1,temporary aws access keys results in auth failures issue type bug report component name module ansible version ansible ansible configuration none environment mac osx not applicable summary using temporary aws access keys results in auth failures steps to reproduce generate temporary access keys eg via sts or saml provider attempt to create resource expected results resource is created actual results aws authfailure exception i originally lodged this against ansible ansible as but i think maybe this should be resolved in the core modules code there s a comment in the code that suggests that perhaps the modules just need to be modified to use connect to aws def get creds module for compatibility mode with old modules that don t can t yet use connect method failed failed true parsed false traceback most recent call last file users secole ansible tmp ansible tmp line in main file users secole ansible tmp ansible tmp line in main instance dict array new instance ids changed create instances module vpc file users secole ansible tmp ansible tmp line in create instances vpc id vpc get all subnets subnet ids vpc id file library python site packages boto vpc init py line in get all subnets return self get list describesubnets params file library python site packages boto connection py line in get list raise self responseerror response status response reason body boto exception unauthorized authfailure aws was not able to validate the provided access credentials from my investigation the issue appears in the module utils py get creds which returns url boto params boto params region since the aws access key id in this example will only work with a security token the method is effectively broken i think the function should at least warn when it detects a security token or an access key that starts with asia instead of akia ,1 982,4746549521.0,IssuesEvent,2016-10-21 11:38:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Feature Idea : cloudformation ""describe"" state",affects_2.0 aws cloud feature_idea waiting_on_maintainer,"What do people thing about ""describe"" or ""available"" state for cloudformation module. I'm happy to add it in and send pull req I just want to make sure it will get accepted before I start it :) **Use case:** build stack in separate playbook/ansible run, and then I would like to use it like ```YML - name: Get MyStack tier cloudformation: aws_access_key: ""{{ aws.access_key }}"" aws_secret_key: ""{{ aws.secret_key }}"" stack_name: my-stack-name state: available register: mystack_data - name: use {{ mystack_data.outputs.* }} in some awesome ways ``` I really don't want to use lookup plugins, as it's a bit ugly and limited to specify aws secrets and keys per run. Also I need to run lookup for each stack output.",True,"Feature Idea : cloudformation ""describe"" state - What do people thing about ""describe"" or ""available"" state for cloudformation module. I'm happy to add it in and send pull req I just want to make sure it will get accepted before I start it :) **Use case:** build stack in separate playbook/ansible run, and then I would like to use it like ```YML - name: Get MyStack tier cloudformation: aws_access_key: ""{{ aws.access_key }}"" aws_secret_key: ""{{ aws.secret_key }}"" stack_name: my-stack-name state: available register: mystack_data - name: use {{ mystack_data.outputs.* }} in some awesome ways ``` I really don't want to use lookup plugins, as it's a bit ugly and limited to specify aws secrets and keys per run. 
Also I need to run lookup for each stack output.",1,feature idea cloudformation describe state what do people thing about describe or available state for cloudformation module i m happy to add it in and send pull req i just want to make sure it will get accepted before i start it use case build stack in separate playbook ansible run and then i would like to use it like yml name get mystack tier cloudformation aws access key aws access key aws secret key aws secret key stack name my stack name state available register mystack data name use mystack data outputs in some awesome ways i really don t want to use lookup plugins as it s a bit ugly and limited to specify aws secrets and keys per run also i need to run lookup for each stack output ,1 111163,17019007726.0,IssuesEvent,2021-07-02 15:53:20,raft-tech/TANF-app,https://api.github.com/repos/raft-tech/TANF-app,closed,AU-02: Auditable Events,security,"**Note** This might be inherited from cloud.gov. If so, this wouldn't need to be documented in github. AC: - [ ] Control implementation statement has been reviewed by Raft for technical accuracy - [ ] Control implementation statement has passed QASP review DoD: - [ ] Control implementation statement has been documented in GitHub **Control Description:** The organization: a. Determines that the information system is capable of auditing the following events: [i. The following events must be identified within server audit logs: • Server startup and shutdown; • Loading and unloading of services; • Installation and removal of software; • System alerts and error messages; • User logon and logoff; • System administration activities; • Accesses to sensitive information, files, and systems • Account creation, modification, or deletion; • Modifications of privileges and access controls; and, • Additional security-related events, as required by the System Owner (SO) or to support the nature of the supported business and applications. ii. The following events must be identified within application and database audit logs: • Modifications to the application; • Application alerts and error messages; • User logon and logoff; • System administration activities; • Accesses to information and files • Account creation, modification, or deletion; and, • Modifications of privileges and access controls. iii. The following events must be identified within network device (e.g., router, firewall, switch, wireless access point) audit logs: • Device startup and shutdown; • Administrator logon and logoff; • Configuration changes; • Account creation, modification, or deletion; • Modifications of privileges and access controls; and, • System alerts and error messages.]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines that the following events are to be audited within the information system: [Unsuccessful log-on attempts that result in a locked account/node; Configuration changes; Application alerts and error messages; System administration activities; Modification of privileges and access; and Account creation, modification, or deletion]. 
For CSP Only AU-2 (a) [Successful and unsuccessful account logon events, account management events, object access, policy change, privilege functions, process tracking, and system events For Web applications: all administrator activity, authentication checks, authorization checks, data deletions, data access, data changes, and permission changes] AU-2 (d) [organization-defined subset of the auditable events defined in AU-2 a to be audited continually for each identified event] AU-2 Additional FedRAMP Requirements and Guidance: Requirement: Coordination between service provider and consumer shall be documented and accepted by the JAB/AO.",True,"AU-02: Auditable Events - **Note** This might be inherited from cloud.gov. If so, this wouldn't need to be documented in github. AC: - [ ] Control implementation statement has been reviewed by Raft for technical accuracy - [ ] Control implementation statement has passed QASP review DoD: - [ ] Control implementation statement has been documented in GitHub **Control Description:** The organization: a. Determines that the information system is capable of auditing the following events: [i. The following events must be identified within server audit logs: • Server startup and shutdown; • Loading and unloading of services; • Installation and removal of software; • System alerts and error messages; • User logon and logoff; • System administration activities; • Accesses to sensitive information, files, and systems • Account creation, modification, or deletion; • Modifications of privileges and access controls; and, • Additional security-related events, as required by the System Owner (SO) or to support the nature of the supported business and applications. ii. The following events must be identified within application and database audit logs: • Modifications to the application; • Application alerts and error messages; • User logon and logoff; • System administration activities; • Accesses to information and files • Account creation, modification, or deletion; and, • Modifications of privileges and access controls. iii. The following events must be identified within network device (e.g., router, firewall, switch, wireless access point) audit logs: • Device startup and shutdown; • Administrator logon and logoff; • Configuration changes; • Account creation, modification, or deletion; • Modifications of privileges and access controls; and, • System alerts and error messages.]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines that the following events are to be audited within the information system: [Unsuccessful log-on attempts that result in a locked account/node; Configuration changes; Application alerts and error messages; System administration activities; Modification of privileges and access; and Account creation, modification, or deletion]. 
For CSP Only AU-2 (a) [Successful and unsuccessful account logon events, account management events, object access, policy change, privilege functions, process tracking, and system events For Web applications: all administrator activity, authentication checks, authorization checks, data deletions, data access, data changes, and permission changes] AU-2 (d) [organization-defined subset of the auditable events defined in AU-2 a to be audited continually for each identified event] AU-2 Additional FedRAMP Requirements and Guidance: Requirement: Coordination between service provider and consumer shall be documented and accepted by the JAB/AO.",0,au auditable events note this might be inherited from cloud gov if so this wouldn t need to be documented in github ac control implementation statement has been reviewed by raft for technical accuracy control implementation statement has passed qasp review dod control implementation statement has been documented in github control description the organization a determines that the information system is capable of auditing the following events i the following events must be identified within server audit logs • server startup and shutdown • loading and unloading of services • installation and removal of software • system alerts and error messages • user logon and logoff • system administration activities • accesses to sensitive information files and systems • account creation modification or deletion • modifications of privileges and access controls and • additional security related events as required by the system owner so or to support the nature of the supported business and applications ii the following events must be identified within application and database audit logs • modifications to the application • application alerts and error messages • user logon and logoff • system administration activities • accesses to information and files • account creation modification or deletion and • modifications of privileges and access controls iii the following events must be identified within network device e g router firewall switch wireless access point audit logs • device startup and shutdown • administrator logon and logoff • configuration changes • account creation modification or deletion • modifications of privileges and access controls and • system alerts and error messages b coordinates the security audit function with other organizational entities requiring audit related information to enhance mutual support and to help guide the selection of auditable events c provides a rationale for why the auditable events are deemed to be adequate to support after the fact investigations of security incidents and d determines that the following events are to be audited within the information system for csp only au a au d au additional fedramp requirements and guidance requirement coordination between service provider and consumer shall be documented and accepted by the jab ao ,0 1755,6574971624.0,IssuesEvent,2017-09-11 14:39:16,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container - missing cap_drop,affects_2.1 cloud docker feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container module ##### ANSIBLE VERSION ansible 2.1.2.0 ##### SUMMARY docker_container module does not support removing capabilities (i.e. docker --cap-drop option). cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now. 
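The feature request above asks the docker_container module to expose capability dropping the way `docker run --cap-drop` does. As a point of reference only (this is the Docker SDK for Python, not the Ansible module), the underlying API already takes `cap_drop`/`cap_add` lists directly; the image name below is illustrative.

```python
# Reference sketch using the Docker SDK for Python ("docker" package):
# start a container with all capabilities dropped except NET_BIND_SERVICE,
# i.e. the behaviour the feature request wants exposed as cap_drop/cap_add.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",                 # illustrative image
    detach=True,
    cap_drop=["ALL"],               # equivalent of: docker run --cap-drop ALL
    cap_add=["NET_BIND_SERVICE"],   # selectively re-add only what is needed
)

print(container.id)
container.stop()
container.remove()
```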
",True,"docker_container - missing cap_drop - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME docker_container module ##### ANSIBLE VERSION ansible 2.1.2.0 ##### SUMMARY docker_container module does not support removing capabilities (i.e. docker --cap-drop option). cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now. ",1,docker container missing cap drop issue type feature idea component name docker container module ansible version ansible summary docker container module does not support removing capabilities i e docker cap drop option cap drop and cap add were added to the docker module in ansible but that module is marked as deprecated now ,1 1201,5133079610.0,IssuesEvent,2017-01-11 01:42:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_elb_lb is not always idempotent when updating healthchecks,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ##### SUMMARY ELB healthcheck changes are sometimes ignored. ##### STEPS TO REPRODUCE Create elb with tcp healthcheck on port 80 with the module. Run the module again, except using a different port. ##### EXPECTED RESULTS The healthcheck should be updated with the new target host:port ##### ACTUAL RESULTS Invocation of the module ``` ""invocation"": { ""module_args"": { ""access_logs"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": true, ""ec2_url"": null, ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""ping_port"": ""45784"", ""ping_protocol"": ""tcp"", ""response_timeout"": 2, ""unhealthy_threshold"": 10 }, ""idle_timeout"": null, ""instance_ids"": null, ""listeners"": [ { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 80, ""protocol"": ""http"" }, { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 443, ""protocol"": ""https"", ""ssl_certificate_id"": ""cert_id"" }, { ""instance_port"": ""60047"", ""instance_protocol"": ""tcp"", ""load_balancer_port"": 8000, ""protocol"": ""ssl"", ""ssl_certificate_id"": ""cert_id"" } ], ""name"": ""elbName"", ""profile"": null, ""purge_instance_ids"": false, ""purge_listeners"": true, ""purge_subnets"": false, ""purge_zones"": false, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": null, ""security_group_names"": [ ""sg1"", ""sg2"" ], ""security_token"": null, ""state"": ""present"", ""stickiness"": null, ""subnets"": [ ""subnet-1234a"", ""subnet-1234b"", ""subnet-1234c"" ], ""tags"": null, ""validate_certs"": true, ""wait"": false, ""wait_timeout"": 60, ""zones"": null }, ""module_name"": ""ec2_elb_lb"" }, ""item"": { ""service1"": { ""host_port"": ""50765"", ""task"": { ""name"": ""taskName"" } }, ""service2"": { ""host_port"": ""60047"" }, ""service3"": { ""host_port"": ""45784"", ""ssl_certificate_id"": ""cert_id"", ""task"": { ""name"": ""taskName"" } }, ""name"": { ""suffix"": ""lend"" }, ""service"": { ""name"": ""serviceName"" } } ``` output from module ``` ""elb"": { ""app_cookie_policy"": null, ""backends"": [ ], ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": ""yes"", ""dns_name"": ""dns_name"", ""health_check"": { ""healthy_threshold"": 2, 
""interval"": 5, ""target"": ""TCP:52359"", ""timeout"": 2, ""unhealthy_threshold"": 10 }, ""hosted_zone_id"": ""hostedZoneId"", ""hosted_zone_name"": null, ""idle_timeout"": 60, ""in_service_count"": 1, ""instance_health"": [ { ""instance_id"": ""i-250534bf"", ""reason_code"": ""N/A"", ""state"": ""InService"" }, { ""instance_id"": ""i-8e68ebcb"", ""reason_code"": ""Instance"", ""state"": ""OutOfService"" } ], ""instances"": [ ""i-250534bf"" ], ""lb_cookie_policy"": null, ""listeners"": [ [ 8000, 60047, ""SSL"", ""TCP"", ""cert_id"" ], [ 80, 45784, ""HTTP"", ""HTTP"" ], [ 443, 45784, ""HTTPS"", ""HTTP"", ""cert_id"" ] ], ""name"": ""elbName"", ""out_of_service_count"": 1, ""proxy_policy"": null, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": [ ""sg1"", ""sg2"" ], ""status"": ""ok"", ""subnets"": [ ""subnet-1"", ""subnet-2"", ""subnet-3"" ], ""tags"": null, ""unknown_instance_state_count"": 0, ""zones"": [ ""us-east-1a"", ""us-east-1b"", ""us-east-1c"" ] } ``` Note the `ping_port` from the invocation does not match what is in the `target:port` from the healthcheck property of the module output. Seems to be ignored in this run. ",True,"ec2_elb_lb is not always idempotent when updating healthchecks - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_elb_lb ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ##### SUMMARY ELB healthcheck changes are sometimes ignored. ##### STEPS TO REPRODUCE Create elb with tcp healthcheck on port 80 with the module. Run the module again, except using a different port. ##### EXPECTED RESULTS The healthcheck should be updated with the new target host:port ##### ACTUAL RESULTS Invocation of the module ``` ""invocation"": { ""module_args"": { ""access_logs"": null, ""aws_access_key"": null, ""aws_secret_key"": null, ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": true, ""ec2_url"": null, ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""ping_port"": ""45784"", ""ping_protocol"": ""tcp"", ""response_timeout"": 2, ""unhealthy_threshold"": 10 }, ""idle_timeout"": null, ""instance_ids"": null, ""listeners"": [ { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 80, ""protocol"": ""http"" }, { ""instance_port"": ""45784"", ""instance_protocol"": ""http"", ""load_balancer_port"": 443, ""protocol"": ""https"", ""ssl_certificate_id"": ""cert_id"" }, { ""instance_port"": ""60047"", ""instance_protocol"": ""tcp"", ""load_balancer_port"": 8000, ""protocol"": ""ssl"", ""ssl_certificate_id"": ""cert_id"" } ], ""name"": ""elbName"", ""profile"": null, ""purge_instance_ids"": false, ""purge_listeners"": true, ""purge_subnets"": false, ""purge_zones"": false, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": null, ""security_group_names"": [ ""sg1"", ""sg2"" ], ""security_token"": null, ""state"": ""present"", ""stickiness"": null, ""subnets"": [ ""subnet-1234a"", ""subnet-1234b"", ""subnet-1234c"" ], ""tags"": null, ""validate_certs"": true, ""wait"": false, ""wait_timeout"": 60, ""zones"": null }, ""module_name"": ""ec2_elb_lb"" }, ""item"": { ""service1"": { ""host_port"": ""50765"", ""task"": { ""name"": ""taskName"" } }, ""service2"": { ""host_port"": ""60047"" }, ""service3"": { ""host_port"": ""45784"", ""ssl_certificate_id"": ""cert_id"", ""task"": { ""name"": ""taskName"" } }, ""name"": { ""suffix"": ""lend"" }, ""service"": { 
""name"": ""serviceName"" } } ``` output from module ``` ""elb"": { ""app_cookie_policy"": null, ""backends"": [ ], ""connection_draining_timeout"": 120, ""cross_az_load_balancing"": ""yes"", ""dns_name"": ""dns_name"", ""health_check"": { ""healthy_threshold"": 2, ""interval"": 5, ""target"": ""TCP:52359"", ""timeout"": 2, ""unhealthy_threshold"": 10 }, ""hosted_zone_id"": ""hostedZoneId"", ""hosted_zone_name"": null, ""idle_timeout"": 60, ""in_service_count"": 1, ""instance_health"": [ { ""instance_id"": ""i-250534bf"", ""reason_code"": ""N/A"", ""state"": ""InService"" }, { ""instance_id"": ""i-8e68ebcb"", ""reason_code"": ""Instance"", ""state"": ""OutOfService"" } ], ""instances"": [ ""i-250534bf"" ], ""lb_cookie_policy"": null, ""listeners"": [ [ 8000, 60047, ""SSL"", ""TCP"", ""cert_id"" ], [ 80, 45784, ""HTTP"", ""HTTP"" ], [ 443, 45784, ""HTTPS"", ""HTTP"", ""cert_id"" ] ], ""name"": ""elbName"", ""out_of_service_count"": 1, ""proxy_policy"": null, ""region"": ""us-east-1"", ""scheme"": ""internal"", ""security_group_ids"": [ ""sg1"", ""sg2"" ], ""status"": ""ok"", ""subnets"": [ ""subnet-1"", ""subnet-2"", ""subnet-3"" ], ""tags"": null, ""unknown_instance_state_count"": 0, ""zones"": [ ""us-east-1a"", ""us-east-1b"", ""us-east-1c"" ] } ``` Note the `ping_port` from the invocation does not match what is in the `target:port` from the healthcheck property of the module output. Seems to be ignored in this run. ",1, elb lb is not always idempotent when updating healthchecks issue type bug report component name elb lb ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment n a aws summary elb healthcheck changes are sometimes ignored steps to reproduce create elb with tcp healthcheck on port with the module run the module again except using a different port expected results the healthcheck should be updated with the new target host port actual results invocation of the module invocation module args access logs null aws access key null aws secret key null connection draining timeout cross az load balancing true url null health check healthy threshold interval ping port ping protocol tcp response timeout unhealthy threshold idle timeout null instance ids null listeners instance port instance protocol http load balancer port protocol http instance port instance protocol http load balancer port protocol https ssl certificate id cert id instance port instance protocol tcp load balancer port protocol ssl ssl certificate id cert id name elbname profile null purge instance ids false purge listeners true purge subnets false purge zones false region us east scheme internal security group ids null security group names security token null state present stickiness null subnets subnet subnet subnet tags null validate certs true wait false wait timeout zones null module name elb lb item host port task name taskname host port host port ssl certificate id cert id task name taskname name suffix lend service name servicename output from module elb app cookie policy null backends connection draining timeout cross az load balancing yes dns name dns name health check healthy threshold interval target tcp timeout unhealthy threshold hosted zone id hostedzoneid hosted zone name null idle timeout in service count instance health instance id i reason code n a state inservice instance id i reason code instance state outofservice instances 
i lb cookie policy null listeners ssl tcp cert id http http https http cert id name elbname out of service count proxy policy null region us east scheme internal security group ids status ok subnets subnet subnet subnet tags null unknown instance state count zones us east us east us east note the ping port from the invocation does not match what is in the target port from the healthcheck property of the module output seems to be ignored in this run ,1 1333,5718505743.0,IssuesEvent,2017-04-19 19:42:26,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Incorrect handling of quotes in lineinfile module,affects_1.9 bug_report waiting_on_maintainer,"### Issue Type Bug Report ### Component Name lineinfile module ### Ansible Version 1.9.3 ### Summary I am using Ansible 1.9.3. I have following content in my `test.yml` file: ``` yaml --- - hosts: all connection: local tasks: - lineinfile: dest: ./example.txt create: yes line: ""something '\""content\""' something else"" ``` And I run it as follows: `ansible-playbook -i localhost, test.yml` ### Expected result `example.txt` contains `something '""content""' something else`. ### Actual result `example.txt` contains `something content something else`. Without any quotes. ### Investigation It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py). After adding quotes line looks like `'something '""content""' something else'` and when passed to `module.safe_eval()` the Pythons's implicit string concatenation is applied and it looses all quotes. --- **Haven't checked if problem exists in development version, it seems module was significantly rewritten.** ",True,"Incorrect handling of quotes in lineinfile module - ### Issue Type Bug Report ### Component Name lineinfile module ### Ansible Version 1.9.3 ### Summary I am using Ansible 1.9.3. I have following content in my `test.yml` file: ``` yaml --- - hosts: all connection: local tasks: - lineinfile: dest: ./example.txt create: yes line: ""something '\""content\""' something else"" ``` And I run it as follows: `ansible-playbook -i localhost, test.yml` ### Expected result `example.txt` contains `something '""content""' something else`. ### Actual result `example.txt` contains `something content something else`. Without any quotes. ### Investigation It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py). After adding quotes line looks like `'something '""content""' something else'` and when passed to `module.safe_eval()` the Pythons's implicit string concatenation is applied and it looses all quotes. 
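The mechanism described above is easy to verify in plain Python: once the module wraps the line in single quotes and evaluates it, the user's quotes turn the value into several adjacent string literals, which Python silently concatenates. A minimal demonstration follows, using `ast.literal_eval` as a stand-in for the module's `safe_eval`.

```python
# The user's line, after the module wraps it in quotes, becomes several adjacent
# string literals; Python's implicit literal concatenation then strips them.
import ast

user_line = 'something \'"content"\' something else'
wrapped = "'%s'" % user_line       # what effectively gets evaluated

print(wrapped)                     # 'something '"content"' something else'
print(ast.literal_eval(wrapped))   # something content something else
```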
--- **Haven't checked if problem exists in development version, it seems module was significantly rewritten.** ",1,incorrect handling of quotes in lineinfile module issue type bug report component name lineinfile module ansible version summary i am using ansible i have following content in my test yml file yaml hosts all connection local tasks lineinfile dest example txt create yes line something content something else and i run it as follows ansible playbook i localhost test yml expected result example txt contains something content something else actual result example txt contains something content something else without any quotes investigation it seems the problem lies in the lineinfile module adding quotes and then executing line module safe eval line relevant after adding quotes line looks like something content something else and when passed to module safe eval the pythons s implicit string concatenation is applied and it looses all quotes haven t checked if problem exists in development version it seems module was significantly rewritten ,1 72875,31769573477.0,IssuesEvent,2023-09-12 10:53:30,gauravrs18/issue_onboarding,https://api.github.com/repos/gauravrs18/issue_onboarding,closed,"dev-angular-style-account-services-new-connection-component-connect-component -consumer-details-component -application-component -payment-component",CX-account-services,"dev-angular-style-account-services-new-connection-component-connect-component -consumer-details-component -application-component -payment-component",1.0,"dev-angular-style-account-services-new-connection-component-connect-component -consumer-details-component -application-component -payment-component - dev-angular-style-account-services-new-connection-component-connect-component -consumer-details-component -application-component -payment-component",0,dev angular style account services new connection component connect component consumer details component application component payment component dev angular style account services new connection component connect component consumer details component application component payment component,0 1872,6577498977.0,IssuesEvent,2017-09-12 01:20:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks,affects_2.1 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17. ##### SUMMARY When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run. 
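The ec2_vpc record above hinges on the fact that AWS stores the canonical network address (10.20.0.0/16) while the comparison uses the user's literal string (10.20.30.0/16). The standard-library `ipaddress` module makes that normalization, and a string-independent comparison, a one-liner; the sketch below only illustrates the idea and is not the module's code.

```python
# Normalizing a loosely written CIDR block the way AWS does, so that
# "10.20.30.0/16" and "10.20.0.0/16" compare as the same network.
import ipaddress

user_cidr = "10.20.30.0/16"      # value from the playbook
aws_cidr = "10.20.0.0/16"        # value AWS reports back for the VPC

print(user_cidr == aws_cidr)     # False: naive string comparison -> VPC recreated

user_net = ipaddress.ip_network(user_cidr, strict=False)  # masks the host bits
aws_net = ipaddress.ip_network(aws_cidr)

print(str(user_net))             # 10.20.0.0/16
print(user_net == aws_net)       # True: compare networks, not strings
```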
##### STEPS TO REPRODUCE Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml ``` --- - hosts: localhost tasks: - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" ``` ##### EXPECTED RESULTS I expect that only one VPC will be created regardless of how many times the playbook is run ##### ACTUAL RESULTS Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made. ``` [dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 [dwood@dawood-arch ansible]$ ``` ",True,"ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17. ##### SUMMARY When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run. ##### STEPS TO REPRODUCE Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml ``` --- - hosts: localhost tasks: - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" - name: ""Create VPC"" local_action: module: ec2_vpc state: present cidr_block: ""10.20.30.0/16"" resource_tags: Name: 'ec2_vpc subnet test' region: ""eu-west-1"" ``` ##### EXPECTED RESULTS I expect that only one VPC will be created regardless of how many times the playbook is run ##### ACTUAL RESULTS Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made. 
``` [dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml [WARNING]: Host file not found: /etc/ansible/hosts [WARNING]: provided hosts list is empty, only localhost is available PLAY [localhost] *************************************************************** TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] TASK [Create VPC] ************************************************************** changed: [localhost -> localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 [dwood@dawood-arch ansible]$ ``` ",1, vpc module erroneously recreates vpcs when passing loosely defined cidr blocks issue type bug report component name vpc module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment host os is arch linux i m building infrastructure in aws using boto version and aws cli version summary when creating vpcs aws will automatically convert your subnet cidr blocks to it s strictest representation will be converted to however when performing checks beginning line of vpc py to determine if the vpc needs to be modified ansible uses the representation provided by the user which can differ from the representation returned by aws in this case a new vpc will be erroneously created for each subsequent playbook run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used save the following playbook as vpc test yml and run it with ansible playbook vpc test yml hosts localhost tasks name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west expected results i expect that only one vpc will be created regardless of how many times the playbook is run actual results two new identical vpcs are created every time this playbook is run despite no playbook changes being made ansible playbook vpc test yml host file not found etc ansible hosts provided hosts list is empty only localhost is available play task changed task changed play recap localhost ok changed unreachable failed ,1 285239,31142886619.0,IssuesEvent,2023-08-16 02:32:31,GarySegal-Mend-Demo/WebGoat,https://api.github.com/repos/GarySegal-Mend-Demo/WebGoat,opened,underscore-min-1.10.2.js: 1 vulnerabilities (highest severity is: 7.2),Mend: dependency security vulnerability,"
Vulnerable Library - underscore-min-1.10.2.js

JavaScript's functional programming helper library.

Library home page: https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.10.2/underscore-min.js

Path to vulnerable library: /src/main/resources/webgoat/static/js/libs/underscore-min.js,/target/classes/webgoat/static/js/libs/underscore-min.js

Found in HEAD commit: 87421051d6f7cef363fd889e6918ec20bc253fb4

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (underscore-min version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | High | 7.2 | underscore-min-1.10.2.js | Direct | underscore - 1.12.1,1.13.0-2 | ❌ |

**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation

## Details
CVE-2021-23358

### Vulnerable Library - underscore-min-1.10.2.js

JavaScript's functional programming helper library.

Library home page: https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.10.2/underscore-min.js

Path to vulnerable library: /src/main/resources/webgoat/static/js/libs/underscore-min.js,/target/classes/webgoat/static/js/libs/underscore-min.js

Dependency Hierarchy:
- :x: **underscore-min-1.10.2.js** (Vulnerable Library)

Found in HEAD commit: 87421051d6f7cef363fd889e6918ec20bc253fb4

Found in base branch: main

### Vulnerability Details

The package underscore from 1.13.0-0 and before 1.13.0-2, and from 1.3.2 and before 1.12.1, is vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument, as it is not sanitized.

Publish Date: 2021-03-29

URL: CVE-2021-23358

### CVSS 3 Score Details (7.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358

Release Date: 2021-03-29

Fix Resolution: underscore - 1.12.1,1.13.0-2

",True,"underscore-min-1.10.2.js: 1 vulnerabilities (highest severity is: 7.2) -
Vulnerable Library - underscore-min-1.10.2.js

JavaScript's functional programming helper library.

Library home page: https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.10.2/underscore-min.js

Path to vulnerable library: /src/main/resources/webgoat/static/js/libs/underscore-min.js,/target/classes/webgoat/static/js/libs/underscore-min.js

Found in HEAD commit: 87421051d6f7cef363fd889e6918ec20bc253fb4

## Vulnerabilities

| CVE | Severity | CVSS | Dependency | Type | Fixed in (underscore-min version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | High | 7.2 | underscore-min-1.10.2.js | Direct | underscore - 1.12.1,1.13.0-2 | ❌ |

**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation

## Details
CVE-2021-23358

### Vulnerable Library - underscore-min-1.10.2.js

JavaScript's functional programming helper library.

Library home page: https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.10.2/underscore-min.js

Path to vulnerable library: /src/main/resources/webgoat/static/js/libs/underscore-min.js,/target/classes/webgoat/static/js/libs/underscore-min.js

Dependency Hierarchy:
- :x: **underscore-min-1.10.2.js** (Vulnerable Library)

Found in HEAD commit: 87421051d6f7cef363fd889e6918ec20bc253fb4

Found in base branch: main

### Vulnerability Details

The package underscore from 1.13.0-0 and before 1.13.0-2, and from 1.3.2 and before 1.12.1, is vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument, as it is not sanitized.

Publish Date: 2021-03-29

URL: CVE-2021-23358

### CVSS 3 Score Details (7.2)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: High
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

### Suggested Fix

Type: Upgrade version

Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358

Release Date: 2021-03-29

Fix Resolution: underscore - 1.12.1,1.13.0-2

",0,underscore min js vulnerabilities highest severity is vulnerable library underscore min js javascript s functional programming helper library library home page a href path to vulnerable library src main resources webgoat static js libs underscore min js target classes webgoat static js libs underscore min js found in head commit a href vulnerabilities cve severity cvss dependency type fixed in underscore min version remediation possible high underscore min js direct underscore in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details cve vulnerable library underscore min js javascript s functional programming helper library library home page a href path to vulnerable library src main resources webgoat static js libs underscore min js target classes webgoat static js libs underscore min js dependency hierarchy x underscore min js vulnerable library found in head commit a href found in base branch main vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code injection via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution underscore ,0 37232,12473809401.0,IssuesEvent,2020-05-29 08:31:22,Kalskiman/gentelella,https://api.github.com/repos/Kalskiman/gentelella,opened,CVE-2017-16113 (High) detected in parsejson-0.0.3.tgz,security vulnerability,"## CVE-2017-16113 - High Severity Vulnerability
Vulnerable Library - parsejson-0.0.3.tgz

Method that parses a JSON string and returns a JSON object

Library home page: https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz

Path to dependency file: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/package.json

Path to vulnerable library: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/node_modules/parsejson/package.json

Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
  - socket.io-1.7.3.tgz
    - socket.io-client-1.7.3.tgz
      - engine.io-client-1.8.3.tgz
        - :x: **parsejson-0.0.3.tgz** (Vulnerable Library)

Found in HEAD commit: 0736072b46adcf2ceef588bb8660b4851929bc43

Vulnerability Details

The parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed.
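To illustrate the class of problem, a short Python sketch; the pattern below is a textbook catastrophic-backtracking example, not parsejson's actual regular expression, which is not reproduced in this report:

```python
import re
import time

pattern = re.compile(r"^(a+)+$")   # nested quantifiers invite catastrophic backtracking
payload = "a" * 28 + "!"           # untrusted input with a non-matching tail

start = time.time()
pattern.match(payload)             # runtime grows roughly exponentially with input length
print(f"took {time.time() - start:.2f}s for input of length {len(payload)}")
```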

Publish Date: 2018-06-07

URL: CVE-2017-16113

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-16113 (High) detected in parsejson-0.0.3.tgz - ## CVE-2017-16113 - High Severity Vulnerability
Vulnerable Library - parsejson-0.0.3.tgz

Method that parses a JSON string and returns a JSON object

Library home page: https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz

Path to dependency file: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/package.json

Path to vulnerable library: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/node_modules/parsejson/package.json

Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
  - socket.io-1.7.3.tgz
    - socket.io-client-1.7.3.tgz
      - engine.io-client-1.8.3.tgz
        - :x: **parsejson-0.0.3.tgz** (Vulnerable Library)

Found in HEAD commit: 0736072b46adcf2ceef588bb8660b4851929bc43

Vulnerability Details

The parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed.

Publish Date: 2018-06-07

URL: CVE-2017-16113

CVSS 3 Score Details (7.5)

Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High

For more information on CVSS3 Scores, click here.

*** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in parsejson tgz cve high severity vulnerability vulnerable library parsejson tgz method that parses a json string and returns a json object library home page a href path to dependency file tmp ws ua dvxliq archiveextraction fgllen ws scm depth gentelella vendors flot flot package package json path to vulnerable library tmp ws ua dvxliq archiveextraction fgllen ws scm depth gentelella vendors flot flot package node modules parsejson package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x parsejson tgz vulnerable library found in head commit a href vulnerability details the parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource ,0 1219,9695259517.0,IssuesEvent,2019-05-24 21:43:41,MakeAWishFoundation/SwiftyMocky,https://api.github.com/repos/MakeAWishFoundation/SwiftyMocky,opened,Extract integration tests into separate repository,automation enhancement,"We are now testing generation and test suite for swift 4.2 and 5.0, on iOS and tvOS. But we are missing integration tests for SwiftyMocky: - as a unit tests pod - as a UI tests pod - as a prototyping pod in main app - carthage integration for test targets - carthage integration for app targets We could extract additional repository and link it to the Travis, in order to maintain test projects with separate (integration only) test suite.",1.0,"Extract integration tests into separate repository - We are now testing generation and test suite for swift 4.2 and 5.0, on iOS and tvOS. But we are missing integration tests for SwiftyMocky: - as a unit tests pod - as a UI tests pod - as a prototyping pod in main app - carthage integration for test targets - carthage integration for app targets We could extract additional repository and link it to the Travis, in order to maintain test projects with separate (integration only) test suite.",0,extract integration tests into separate repository we are now testing generation and test suite for swift and on ios and tvos but we are missing integration tests for swiftymocky as a unit tests pod as a ui tests pod as a prototyping pod in main app carthage integration for test targets carthage integration for app targets we could extract additional repository and link it to the travis in order to maintain test projects with separate integration only test suite ,0 19944,14766587323.0,IssuesEvent,2021-01-10 01:08:13,NCAR/VAPOR,https://api.github.com/repos/NCAR/VAPOR,reopened,Disagreement between TF widget and Colorbar,High Usability,"In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar. 
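A minimal sketch of the truthiness problem described above; the helper names are hypothetical and this is not the iam module's source:

```python
def buggy_update(groups):
    # An empty list is falsy, so asking for "no groups" silently does nothing.
    if groups:
        return f"sync membership to {groups}"
    return "skipped"

def fixed_update(groups=None):
    # Distinguish "parameter not supplied" (None) from "explicitly empty" ([]).
    if groups is not None:
        return f"sync membership to {groups}"
    return "skipped"

print(buggy_update([]))   # skipped -> the user stays in the "foo" group
print(fixed_update([]))   # sync membership to [] -> removed from all groups
```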
![screen shot 2019-01-24 at 10 20 05 pm](https://user-images.githubusercontent.com/2772687/51726774-46316880-2026-11e9-9023-56ecdbd67a6d.png) ",True,"Disagreement between TF widget and Colorbar - In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar. ![screen shot 2019-01-24 at 10 20 05 pm](https://user-images.githubusercontent.com/2772687/51726774-46316880-2026-11e9-9023-56ecdbd67a6d.png) ",0,disagreement between tf widget and colorbar in this case i created a slice renderer for dbz for lee orf s tornado dataset with the default tf and added a colorbar ,0 1822,6577329897.0,IssuesEvent,2017-09-12 00:09:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,IAM user can not go from N to 0 groups.,affects_2.0 aws bug_report cloud waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/iam ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/sbrady/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat ~/.ansible.cfg [defaults] nocows=1 [ssh_connection] pipelining = True ``` ##### OS / ENVIRONMENT Linux but issue should not be platform specific. ##### SUMMARY When trying to change group membership of a user from one or more groups, to no groups, no groups are changed. ##### STEPS TO REPRODUCE ``` iam: iam_type: ""user"" name: ""joe"" groups: [] ``` ##### EXPECTED RESULTS Expected ""joe"" to no longer be in the ""foo"" group. ##### ACTUAL RESULTS ""joe"" remained in the ""foo"" group. Examining the code, I see the issue. https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683 The module uses `if groups:`, where groups is a list. Any empty list (""I want this user to be in no groups"") will evaluate to `False`, and therefore the block will not execute. I believe the author meant to check if the parameter had been passed at all. Please advise if I am mis-using the module, or can provide more information. Thanks. ",True,"IAM user can not go from N to 0 groups. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/amazon/iam ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /home/sbrady/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` $ cat ~/.ansible.cfg [defaults] nocows=1 [ssh_connection] pipelining = True ``` ##### OS / ENVIRONMENT Linux but issue should not be platform specific. ##### SUMMARY When trying to change group membership of a user from one or more groups, to no groups, no groups are changed. ##### STEPS TO REPRODUCE ``` iam: iam_type: ""user"" name: ""joe"" groups: [] ``` ##### EXPECTED RESULTS Expected ""joe"" to no longer be in the ""foo"" group. ##### ACTUAL RESULTS ""joe"" remained in the ""foo"" group. Examining the code, I see the issue. https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683 The module uses `if groups:`, where groups is a list. Any empty list (""I want this user to be in no groups"") will evaluate to `False`, and therefore the block will not execute. I believe the author meant to check if the parameter had been passed at all. Please advise if I am mis-using the module, or can provide more information. Thanks. 
",1,iam user can not go from n to groups issue type bug report component name cloud amazon iam ansible version ansible config file home sbrady ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables cat ansible cfg nocows pipelining true os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux but issue should not be platform specific summary when trying to change group membership of a user from one or more groups to no groups no groups are changed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used iam iam type user name joe groups expected results expected joe to no longer be in the foo group actual results joe remained in the foo group examining the code i see the issue the module uses if groups where groups is a list any empty list i want this user to be in no groups will evaluate to false and therefore the block will not execute i believe the author meant to check if the parameter had been passed at all please advise if i am mis using the module or can provide more information thanks ,1 1682,6574154006.0,IssuesEvent,2017-09-11 11:43:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter,affects_2.3 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group parameter NewInstancesProtectedFromScaleIn is currently unsupported ##### STEPS TO REPRODUCE ``` - ec2_asg: name: myasg launch_config_name: my_new_lc health_check_period: 60 health_check_type: ELB min_size: 5 max_size: 5 desired_capacity: 5 region: us-east-1 new_instances_protected_from_scale_in: true | false ``` ##### EXPECTED RESULTS param to be taken into account",True,"EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION ``` ansible 2.3.0 config file = configured module search path = Default w/o overrides ``` ##### SUMMARY see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group parameter NewInstancesProtectedFromScaleIn is currently unsupported ##### STEPS TO REPRODUCE ``` - ec2_asg: name: myasg launch_config_name: my_new_lc health_check_period: 60 health_check_type: ELB min_size: 5 max_size: 5 desired_capacity: 5 region: us-east-1 new_instances_protected_from_scale_in: true | false ``` ##### EXPECTED RESULTS param to be taken into account",1, asg support newinstancesprotectedfromscalein parameter issue type feature idea component name asg ansible version ansible config file configured module search path default w o overrides summary see parameter newinstancesprotectedfromscalein is currently unsupported steps to reproduce asg name myasg launch config name my new lc health check period health check type elb min size max size desired capacity region us east new instances protected from scale in true false expected results param to be taken into account,1 
68925,29929260290.0,IssuesEvent,2023-06-22 08:19:16,flipperdevices/flipperzero-firmware,https://api.github.com/repos/flipperdevices/flipperzero-firmware,closed,Add a Keypad Lock mode to dummy mode in GUI,Feature Request Core+Services,"### Describe the enhancement you're suggesting. It would be nice if it were possible to implement the Keypad Lock functionality even on the dummy mode, so that if a person has possession of your Flipper Zero, to play on it, it doesn't have the possibility to enter into the brainac mode for pentest or malicious purpose. ### Anything else? _No response_",1.0,"Add a Keypad Lock mode to dummy mode in GUI - ### Describe the enhancement you're suggesting. It would be nice if it were possible to implement the Keypad Lock functionality even on the dummy mode, so that if a person has possession of your Flipper Zero, to play on it, it doesn't have the possibility to enter into the brainac mode for pentest or malicious purpose. ### Anything else? _No response_",0,add a keypad lock mode to dummy mode in gui describe the enhancement you re suggesting it would be nice if it were possible to implement the keypad lock functionality even on the dummy mode so that if a person has possession of your flipper zero to play on it it doesn t have the possibility to enter into the brainac mode for pentest or malicious purpose anything else no response ,0 1661,6574048167.0,IssuesEvent,2017-09-11 11:14:50,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ios_config: multiline ip sla does not correctly handle escaped URLs,affects_2.2 bug_report networking waiting_on_maintainer," ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT Ubuntu 16.04 managing Cisco 2901 Router ##### SUMMARY Trying to script pushing a multi-line ""ip sla"" stanza and it appears to not handle escaped strings (i.e. URLs) correctly. ##### STEPS TO REPRODUCE Desired IOS config (FQDN has been changed): ``` ip sla 1000 http raw http://www.example.com/data/lib/10k.txt http-raw-request GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n \r\n exit ``` Using this playbook task: ``` - name: CREATE WORKING IP SLA ios_config: provider: ""{{ provider }}"" authorize: yes lines: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" - ""GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n"" - ""\r\n"" - exit parents: ip sla 1000 after: ""ip sla schedule 1000 life forever start-time now"" ``` ##### EXPECTED RESULTS I can successfully push the following generic ""ip sla http get"" example: ``` ip sla 1000 http get http://www.ibm.com/data/lib/10k.txt ip sla schedule 1000 start-time now ``` ##### ACTUAL RESULTS From failed multi-line ""ip sla"" example: ``` root@playground:/etc/ansible/net-eng# ansible-playbook -vvvv ip_sla.yaml Using /etc/ansible/ansible.cfg as config file ERROR! Syntax Error while loading YAML. The error appears to have been in '/etc/ansible/net-eng/ip_sla.yaml': line 42, column 1, but may be elsewhere in the file depending on the exact syntax problem. 
The offending line appears to be: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" ^ here ``` ",True,"ios_config: multiline ip sla does not correctly handle escaped URLs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_config ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None. ##### OS / ENVIRONMENT Ubuntu 16.04 managing Cisco 2901 Router ##### SUMMARY Trying to script pushing a multi-line ""ip sla"" stanza and it appears to not handle escaped strings (i.e. URLs) correctly. ##### STEPS TO REPRODUCE Desired IOS config (FQDN has been changed): ``` ip sla 1000 http raw http://www.example.com/data/lib/10k.txt http-raw-request GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n \r\n exit ``` Using this playbook task: ``` - name: CREATE WORKING IP SLA ios_config: provider: ""{{ provider }}"" authorize: yes lines: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" - ""GET http://www.example.com/data/lib/10k.txt HTTP/1.0\r\n"" - ""\r\n"" - exit parents: ip sla 1000 after: ""ip sla schedule 1000 life forever start-time now"" ``` ##### EXPECTED RESULTS I can successfully push the following generic ""ip sla http get"" example: ``` ip sla 1000 http get http://www.ibm.com/data/lib/10k.txt ip sla schedule 1000 start-time now ``` ##### ACTUAL RESULTS From failed multi-line ""ip sla"" example: ``` root@playground:/etc/ansible/net-eng# ansible-playbook -vvvv ip_sla.yaml Using /etc/ansible/ansible.cfg as config file ERROR! Syntax Error while loading YAML. The error appears to have been in '/etc/ansible/net-eng/ip_sla.yaml': line 42, column 1, but may be elsewhere in the file depending on the exact syntax problem. 
The offending line appears to be: - ""http raw http://www.example.com/data/lib/10k.txt"" - ""http-raw-request"" ^ here ``` ",1,ios config multiline ip sla does not correctly handle escaped urls issue type bug report component name ios config ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ubuntu managing cisco router summary trying to script pushing a multi line ip sla stanza and it appears to not handle escaped strings i e urls correctly steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used desired ios config fqdn has been changed ip sla http raw http raw request get http r n r n exit using this playbook task name create working ip sla ios config provider provider authorize yes lines http raw http raw request get http r n r n exit parents ip sla after ip sla schedule life forever start time now expected results i can successfully push the following generic ip sla http get example ip sla http get ip sla schedule start time now actual results from failed multi line ip sla example root playground etc ansible net eng ansible playbook vvvv ip sla yaml using etc ansible ansible cfg as config file error syntax error while loading yaml the error appears to have been in etc ansible net eng ip sla yaml line column but may be elsewhere in the file depending on the exact syntax problem the offending line appears to be http raw http raw request here ,1 455439,13126831010.0,IssuesEvent,2020-08-06 09:15:51,The-Codin-Hole/HotWired-Bot,https://api.github.com/repos/The-Codin-Hole/HotWired-Bot,closed,"Cant Launch Bot From start.py, 'module' object is not callable",priority: 1 - high type: bug,"[`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File ""G:\New Downloads\HotWired-Bot\start.py"", line 5, in main() TypeError: 'module' object is not callable ```",1.0,"Cant Launch Bot From start.py, 'module' object is not callable - [`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File ""G:\New Downloads\HotWired-Bot\start.py"", line 5, in main() TypeError: 'module' object is not callable ```",0,cant launch bot from start py module object is not callable file doesn t start the bot properly and raises an exception traceback most recent call last file g new downloads hotwired bot start py line in main typeerror module object is not callable ,0 270520,8461352422.0,IssuesEvent,2018-10-22 21:33:01,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,preev.com - site is not usable,browser-firefox priority-normal," **URL**: http://preev.com/ **Browser / Version**: Firefox 64.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: Website is not usable in mobile browser. Try to click on input field and it will not work. **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2018/10/e4325466-4761-47ea-b249-51628cb07b3f-thumb.jpeg)](https://webcompat.com/uploads/2018/10/e4325466-4761-47ea-b249-51628cb07b3f.jpeg)
Browser Configuration
    {u'mixed active content blocked': False, u'buildID': u'20181015220336', u'hasTouchScreen': False, u'tracking content blocked': u'false', u'consoleLog': [u'[JavaScript Warning: ""The resource at http://www.google-analytics.com/ga.js was blocked because content blocking is enabled."" {file: ""http://preev.com/"" line: 0}]', u'[JavaScript Warning: ""Loading failed for the